Feb 23 10:06:05 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 23 10:06:05 crc restorecon[4703]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:05 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 
10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
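The run of entries above is restorecon walking a single pod's emptyDir: the image-registry pod (8f668bae-612b-4b75-9490-919e737c6a3b) carries an extracted CA trust bundle in which every root certificate appears twice, once as a named PEM and once as an OpenSSL subject-hash link such as 882de061.0, and every file wears the pod's MCS category pair (s0:c10,c16 here). restorecon reports "not reset" rather than relabeling because container_file_t is a customizable type in the shipped policy, so a plain restorecon run (without -F) leaves the kubelet-assigned labels alone. A minimal sketch of tallying these entries per pod and category pair; the log filename is hypothetical, the line format is exactly the one shown above:

  import re
  from collections import Counter

  # <timestamp> crc restorecon[pid]: <path> not reset as customized by admin to <context>
  ENTRY = re.compile(
      r"restorecon\[\d+\]: (/var/lib/kubelet/pods/([^/\s]+)\S*) "
      r"not reset as customized by admin to (\S+)"
  )

  def mcs(context):
      # system_u:object_r:container_file_t:s0:c10,c16 -> "c10,c16"
      return context.split(":s0:")[1] if ":s0:" in context else "s0"

  per_pod = Counter()
  with open("kubelet-start.log") as log:  # hypothetical filename
      for path, pod, context in ENTRY.findall(log.read()):
          per_pod[(pod, mcs(context))] += 1

  for (pod, cats), n in per_pod.most_common():
      print(f"{n:5d}  {cats:12s}  {pod}")

Grouped this way, the registry pod's CA bundle accounts for the bulk of the c10,c16 entries in this section.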
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]:
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
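Two directory shapes are mixed in the entries above: pods created through the API server keep their UID as the directory name (for example 9d751cbb-f2e2-430d-9754-c882a5e924a5), while the kube-controller-manager static pod appears as a bare 32-hex name (f614b9022728cf315e60c057852e563e), which the kubelet derives by hashing the manifest file. The alternation between s0:c776,c1007 and s0:c214,c928 under that static pod suggests container directories left over from an earlier instance of the pod, which keep the category pair they were created with. A small sketch telling the two shapes apart; the helper name is mine:

  import re

  UID   = re.compile(r"^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$")
  HEX32 = re.compile(r"^[0-9a-f]{32}$")

  def pod_dir_kind(name):
      # /var/lib/kubelet/pods/<name>: a UUID for API pods, a 32-hex
      # config hash for static pods created from manifest files.
      if UID.match(name):
          return "api pod"
      if HEX32.match(name):
          return "static pod"
      return "unknown"

  print(pod_dir_kind("f614b9022728cf315e60c057852e563e"))      # static pod
  print(pod_dir_kind("0b574797-001e-440a-8f4e-c0be86edad0f"))  # api pod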
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to
system_u:object_r:container_file_t:s0:c37,c572 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
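From this point the walk enters pod b11524ee-3fca-4b1b-9cdf-6da289fdbc7d, whose catalog-content emptyDir holds an unpacked file-based operator catalog: one directory per package, each with a catalog.json describing that package's channels and bundles, all labeled with the pod's pair s0:c7,c13. The long alphabetical run that follows is simply that tree, one entry per package directory and one per catalog.json; the -certified package names suggest this is the certified-operators index. A sketch of listing the packages from such a tree, using the mount path taken verbatim from the entries; running it on the node itself (e.g. from a debug shell) is an assumption:

  from pathlib import Path

  # EmptyDir path copied from the entries above.
  root = Path("/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"
              "/volumes/kubernetes.io~empty-dir/catalog-content/catalog")

  # A file-based catalog keeps one directory per package, each holding
  # a catalog.json with that package's channels and bundles.
  packages = sorted(p.name for p in root.iterdir()
                    if (p / "catalog.json").is_file())
  print(len(packages), "packages:", ", ".join(packages[:5]), "...")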
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23
10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 10:06:06 crc restorecon[4703]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 
10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc 
restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 10:06:06 crc restorecon[4703]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 23 10:06:06 crc kubenswrapper[4904]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 10:06:06 crc kubenswrapper[4904]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 23 10:06:06 crc kubenswrapper[4904]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 10:06:06 crc kubenswrapper[4904]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 23 10:06:06 crc kubenswrapper[4904]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 23 10:06:06 crc kubenswrapper[4904]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.985370 4904 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990811 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990839 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990848 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990860 4904 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990869 4904 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990878 4904 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990887 4904 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990897 4904 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990906 4904 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990914 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990922 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990930 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990939 4904 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990947 4904 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990956 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990964 4904 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990972 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990979 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990987 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.990995 4904 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991010 4904 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991018 4904 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991026 4904 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991033 4904 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991041 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991048 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991056 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991064 4904 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991071 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991079 4904 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991089 4904 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991099 4904 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991107 4904 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991115 4904 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991123 4904 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991131 4904 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991139 4904 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991146 4904 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991155 4904 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991163 4904 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991172 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991182 4904 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991190 4904 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991198 4904 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991205 4904 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991213 4904 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991220 4904 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991228 4904 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991236 4904 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991243 4904 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991251 4904 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991259 4904 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991268 4904 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991278 4904 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991287 4904 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991296 4904 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991309 4904 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991319 4904 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991329 4904 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991339 4904 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991347 4904 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991356 4904 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991364 4904 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991372 4904 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991379 4904 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991387 4904 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991394 4904 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991403 4904 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991411 4904 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991421 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.991431 4904 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994132 4904 flags.go:64] FLAG: --address="0.0.0.0"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994157 4904 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994168 4904 flags.go:64] FLAG: --anonymous-auth="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994178 4904 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994187 4904 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994193 4904 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994202 4904 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994210 4904 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994217 4904 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994223 4904 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994230 4904 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994236 4904 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994242 4904 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994248 4904 flags.go:64] FLAG: --cgroup-root=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994254 4904 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994261 4904 flags.go:64] FLAG: --client-ca-file=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994266 4904 flags.go:64] FLAG: --cloud-config=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994272 4904 flags.go:64] FLAG: --cloud-provider=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994278 4904 flags.go:64] FLAG: --cluster-dns="[]"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994288 4904 flags.go:64] FLAG: --cluster-domain=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994294 4904 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994300 4904 flags.go:64] FLAG: --config-dir=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994306 4904 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994313 4904 flags.go:64] FLAG: --container-log-max-files="5"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994322 4904 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994328 4904 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994334 4904 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994340 4904 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994346 4904 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994353 4904 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994359 4904 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994365 4904 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994372 4904 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.994379 4904 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995428 4904 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995445 4904 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995451 4904 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995458 4904 flags.go:64] FLAG: --enable-server="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995464 4904 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995472 4904 flags.go:64] FLAG: --event-burst="100"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995479 4904 flags.go:64] FLAG: --event-qps="50"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995485 4904 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995492 4904 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995498 4904 flags.go:64] FLAG: --eviction-hard=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995506 4904 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995512 4904 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995518 4904 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995524 4904 flags.go:64] FLAG: --eviction-soft=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995530 4904 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995536 4904 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995542 4904 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995549 4904 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995555 4904 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995561 4904 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995567 4904 flags.go:64] FLAG: --feature-gates=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995574 4904 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995581 4904 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995587 4904 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995593 4904 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995600 4904 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995606 4904 flags.go:64] FLAG: --help="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995612 4904 flags.go:64] FLAG: --hostname-override=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995618 4904 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995624 4904 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995630 4904 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995636 4904 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995642 4904 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995648 4904 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995655 4904 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995661 4904 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995667 4904 flags.go:64] FLAG: --kube-api-burst="100"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995673 4904 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995679 4904 flags.go:64] FLAG: --kube-api-qps="50"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995685 4904 flags.go:64] FLAG: --kube-reserved=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995691 4904 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995697 4904 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995703 4904 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995709 4904 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995730 4904 flags.go:64] FLAG: --lock-file=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995736 4904 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995742 4904 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995749 4904 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995758 4904 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995764 4904 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995771 4904 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995777 4904 flags.go:64] FLAG: --logging-format="text"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995785 4904 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995792 4904 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995798 4904 flags.go:64] FLAG: --manifest-url=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995804 4904 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995812 4904 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995819 4904 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995827 4904 flags.go:64] FLAG: --max-pods="110"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995833 4904 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995839 4904 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995846 4904 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995853 4904 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995859 4904 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995865 4904 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995872 4904 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995886 4904 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995892 4904 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995899 4904 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995905 4904 flags.go:64] FLAG: --pod-cidr=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995912 4904 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995922 4904 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995928 4904 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995935 4904 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995940 4904 flags.go:64] FLAG: --port="10250"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995947 4904 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995953 4904 flags.go:64] FLAG: --provider-id=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995959 4904 flags.go:64] FLAG: --qos-reserved=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995965 4904 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995971 4904 flags.go:64] FLAG: --register-node="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995978 4904 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995983 4904 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.995995 4904 flags.go:64] FLAG: --registry-burst="10"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996002 4904 flags.go:64] FLAG: --registry-qps="5"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996009 4904 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996015 4904 flags.go:64] FLAG: --reserved-memory=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996023 4904 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996030 4904 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996036 4904 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996042 4904 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996048 4904 flags.go:64] FLAG: --runonce="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996055 4904 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996061 4904 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996068 4904 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996074 4904 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996080 4904 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996086 4904 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996093 4904 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996100 4904 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996106 4904 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996112 4904 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996118 4904 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996124 4904 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996131 4904 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996137 4904 flags.go:64] FLAG: --system-cgroups=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996143 4904 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996153 4904 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996159 4904 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996165 4904 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996175 4904 flags.go:64] FLAG: --tls-min-version=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996181 4904 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996187 4904 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996192 4904 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996198 4904 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996206 4904 flags.go:64] FLAG: --v="2"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996214 4904 flags.go:64] FLAG: --version="false"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996223 4904 flags.go:64] FLAG: --vmodule=""
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996231 4904 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.996237 4904 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996384 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996392 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996399 4904 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996405 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996411 4904 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996417 4904 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996423 4904 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996428 4904 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996434 4904 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996439 4904 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996444 4904 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996450 4904 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996455 4904 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996460 4904 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996466 4904 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996471 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996477 4904 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996482 4904 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996489 4904 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996496 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996503 4904 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996510 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996516 4904 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996522 4904 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996527 4904 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996533 4904 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996540 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996545 4904 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996551 4904 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996556 4904 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996562 4904 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996567 4904 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996572 4904 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996577 4904 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996583 4904 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996588 4904 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996593 4904 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996599 4904 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996604 4904 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996609 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996614 4904 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996620 4904 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996625 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996630 4904 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996636 4904 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996641 4904 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996646 4904 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996651 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996658 4904 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996665 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996671 4904 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996676 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996682 4904 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996688 4904 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996694 4904 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996700 4904 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996705 4904 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996710 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996732 4904 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996740 4904 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996747 4904 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996753 4904 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996759 4904 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996765 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996770 4904 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996776 4904 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996782 4904 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996787 4904 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996793 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996798 4904 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 10:06:06 crc kubenswrapper[4904]: W0223 10:06:06.996804 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 10:06:06 crc kubenswrapper[4904]: I0223 10:06:06.997489 4904 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.010393 4904 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.010452 4904 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010590 4904 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010604 4904 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010615 4904 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010624 4904 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010634 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010643 4904 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010655 4904 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010667 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010678 4904 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010688 4904 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010696 4904 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010705 4904 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010778 4904 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010791 4904 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010806 4904 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010819 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010830 4904 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010841 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010851 4904 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010861 4904 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010869 4904 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010878 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010887 4904 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010895 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010903 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010912 4904 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010920 4904 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010929 4904 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010937 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010946 4904 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010954 4904 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010963 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010972 4904 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010980 4904 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010989 4904 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.010997 4904 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011005 4904 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011014 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011022 4904 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011030 4904 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011039 4904 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011050 4904 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011062 4904 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011073 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011085 4904 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011096 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011106 4904 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011114 4904 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011123 4904 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011135 4904 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011147 4904 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011157 4904 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011166 4904 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011174 4904 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011183 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011192 4904 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011200 4904 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011209 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011217 4904 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011225 4904 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011234 4904 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011242 4904 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011252 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011261 4904 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011269 4904 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011278 4904 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011286 4904 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011295 4904 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011303 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011311 4904 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011320 4904 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.011335 4904 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011563 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011580 4904 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011591 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011601 4904 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011611 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011620 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011628 4904 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011636 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011644 4904 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011653 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011662 4904 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011670 4904 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011679 4904 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011688 4904 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011698 4904 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011708 4904 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011751 4904 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011762 4904 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011773 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011783 4904 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011794 4904 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011804 4904 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011814 4904 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011824 4904 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011837 4904 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011848 4904 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011859 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011869 4904 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011879 4904 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011889 4904 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011898 4904 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011908 4904 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011917 4904 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011926 4904 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011934 4904 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011944 4904 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011954 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011963 4904 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011972 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011981 4904 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011990 4904 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.011999 4904 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012007 4904 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012015 4904 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012026 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012034 4904 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012042 4904 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012051 4904 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012059 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012070 4904 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012082 4904 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012093 4904 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012101 4904 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012111 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012120 4904 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012129 4904 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012138 4904 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012147 4904 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012155 4904 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012164 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012173 4904 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012182 4904 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012190 4904 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012199 4904 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012207 4904 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012216 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012224 4904 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012232 4904 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012241 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012250 4904 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.012259 4904 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.012271 4904 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.012579 4904 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.021226 4904 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.021406 4904 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.023605 4904 server.go:997] "Starting client certificate rotation"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.023658 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.023936 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-12 02:59:18.407580152 +0000 UTC
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.024069 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.050195 4904 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.053462 4904 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.057177 4904 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.075879 4904 log.go:25] "Validated CRI v1 runtime API"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.114517 4904 log.go:25] "Validated CRI v1 image API"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.116741 4904 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.122789 4904 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-23-10-00-36-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.122853 4904 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.157814 4904 manager.go:217] Machine: {Timestamp:2026-02-23 10:06:07.152638483 +0000 UTC m=+0.573012076 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a448a05d-7f8e-4a16-bfb9-e12591dd55db BootID:2177c7d2-fddd-4945-9ead-9ca47cb98812 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f5:40:97 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f5:40:97 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d8:1b:a0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:51:a3:4e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:64:f7:e4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:87:ac:b1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:b4:79:1b:f8:dd Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:86:c3:62:e7:0c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768
Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.158905 4904 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.159262 4904 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.159892 4904 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.160196 4904 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.160241 4904 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.160588 4904 topology_manager.go:138] "Creating topology manager with none policy" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.160602 4904 container_manager_linux.go:303] "Creating device plugin manager" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.161192 4904 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.161217 4904 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.162103 4904 state_mem.go:36] "Initialized new in-memory state store" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.162203 4904 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.166705 4904 kubelet.go:418] "Attempting to sync node with API server" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.166744 4904 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.166785 4904 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.166803 4904 kubelet.go:324] "Adding apiserver pod source" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.166815 4904 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.170926 4904 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.172038 4904 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.175492 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.175752 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.176154 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.176367 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.178843 4904 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.181893 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.182055 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.182739 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.182864 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.182967 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.183061 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.183175 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.183268 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.183342 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.183427 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.183504 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.183579 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.185666 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 
10:06:07.186843 4904 server.go:1280] "Started kubelet" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.186925 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.189424 4904 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 10:06:07 crc systemd[1]: Started Kubernetes Kubelet. Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.192512 4904 server.go:460] "Adding debug handlers to kubelet server" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.192502 4904 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.194072 4904 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.195216 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.195358 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:44:34.863333624 +0000 UTC Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.195586 4904 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.195877 4904 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.195925 4904 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.195914 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.196005 4904 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.196627 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="200ms" Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.197949 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.198125 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.198442 4904 factory.go:55] Registering systemd factory Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.198528 4904 factory.go:221] Registration of the systemd container factory successfully Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.199185 4904 factory.go:153] Registering CRI-O factory 
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.199238 4904 factory.go:221] Registration of the crio container factory successfully Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.199409 4904 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.199462 4904 factory.go:103] Registering Raw factory Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.199512 4904 manager.go:1196] Started watching for new ooms in manager Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.198175 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.138:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896d82768954afa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,LastTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.206077 4904 manager.go:319] Starting recovery of all containers Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.210925 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211030 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211074 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211100 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211133 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211157 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211179 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211212 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211247 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211279 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211303 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211337 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211366 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211408 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211431 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211465 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211491 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211512 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211545 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211568 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211598 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211622 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211643 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211674 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.211706 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212077 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212173 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212228 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212266 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212319 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212350 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212405 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212426 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212491 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212513 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212570 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212596 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212815 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212953 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.212981 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213020 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213042 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213075 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213103 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213124 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213152 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213174 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213202 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213233 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213261 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213297 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213327 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213429 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213471 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213500 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213528 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213552 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213581 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213608 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213639 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213673 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213703 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213778 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213807 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213850 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213915 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213954 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.213991 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214029 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214061 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214140 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214188 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214230 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214254 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214275 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214328 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214351 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214373 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214390 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214409 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214449 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214465 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214489 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214524 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214540 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214559 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214578 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214598 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214616 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214635 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214682 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214698 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214774 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214790 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214805 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214823 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214838 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214859 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214874 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214890 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214909 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214926 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.214951 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215068 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215122 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215180 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215213 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215380 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215423 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215454 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215490 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215543 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215571 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215605 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215631 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215661 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215704 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.215997 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216054 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216122 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216143 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216167 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216186 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216208 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216228 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216246 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216268 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216285 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216304 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216325 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216341 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216361 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216376 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216393 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216413 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216652 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216699 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216745 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216768 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216786 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216855 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.216987 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.217262 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.217311 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.217338 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.217369 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.217386 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.217403 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218469 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218523 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218546 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218569 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218590 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218611 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218630 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218689 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218710 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218761 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218783 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218803 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218822 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218844 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218866 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218887 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218908 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218929 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218950 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218970 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.218991 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219016 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219037 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219061 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219087 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219109 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219130 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219152 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219174 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219241 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219310 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219358 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219392 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219416 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219438 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219458 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219480 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219505 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219527 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.219562 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.223946 4904 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224160 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224267 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224354 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224436 4904 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224527 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224608 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224693 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224803 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224884 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.224974 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225055 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225140 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225236 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225326 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225405 4904 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225478 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225555 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225635 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225755 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225836 4904 reconstruct.go:97] "Volume reconstruction finished" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.225903 4904 reconciler.go:26] "Reconciler: start to sync state" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.244658 4904 manager.go:324] Recovery completed Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.249356 4904 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.253934 4904 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.254011 4904 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.254054 4904 kubelet.go:2335] "Starting kubelet main sync loop" Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.254131 4904 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.255548 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.255653 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.255979 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.257778 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.257821 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.257831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.260037 4904 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.260060 4904 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.260080 4904 state_mem.go:36] "Initialized new in-memory state store" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.278371 4904 policy_none.go:49] "None policy: Start" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.279335 4904 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.279364 4904 state_mem.go:35] "Initializing new in-memory state store" Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.296639 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.326957 4904 manager.go:334] "Starting Device Plugin manager" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.327187 4904 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.327259 4904 server.go:79] "Starting device plugin registration server" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.327765 4904 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.327783 4904 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.328100 4904 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.328383 4904 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.328409 4904 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.337061 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.355129 4904 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.355577 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.357504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.357557 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.357570 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.357763 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358037 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358072 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358667 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358696 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358823 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358868 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358905 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358950 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.358963 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.359001 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.359753 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.359811 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.359837 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.359852 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.359857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.360028 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.360530 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.360757 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.360836 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.361863 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.361934 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.361952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.362307 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.362359 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.362679 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.366825 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.366958 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.367047 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.368252 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.368301 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.368323 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.368626 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.368655 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.368670 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.368932 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.368973 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.369842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.369870 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.369890 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.397308 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="400ms" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.428211 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429051 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429104 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429130 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429154 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429207 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429248 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429272 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429291 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429308 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429327 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429345 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429364 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429409 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429495 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429497 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429519 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429543 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.429572 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.430112 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530310 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530384 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530427 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530458 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530489 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530548 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530598 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530595 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530645 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530708 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530705 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530767 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530819 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530878 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530882 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530904 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530926 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530937 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530954 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530965 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530969 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530983 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.530986 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.531002 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.531008 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.531020 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.531025 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.531041 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.531040 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.631054 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.632532 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.632563 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.632574 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.632598 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.632859 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection 
refused" node="crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.684896 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.701004 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.718675 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.734526 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: I0223 10:06:07.740394 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.740930 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-29a5836878a29aa84df78a8ab8f2b821ee162deff26bcbcfc3c80c42dbdb4aab WatchSource:0}: Error finding container 29a5836878a29aa84df78a8ab8f2b821ee162deff26bcbcfc3c80c42dbdb4aab: Status 404 returned error can't find the container with id 29a5836878a29aa84df78a8ab8f2b821ee162deff26bcbcfc3c80c42dbdb4aab Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.744612 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c016c177f7bf7e1fcd55939cb62f353abd9e31a11a2c2c0f474d4a412f278afb WatchSource:0}: Error finding container c016c177f7bf7e1fcd55939cb62f353abd9e31a11a2c2c0f474d4a412f278afb: Status 404 returned error can't find the container with id c016c177f7bf7e1fcd55939cb62f353abd9e31a11a2c2c0f474d4a412f278afb Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.753309 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b9080fb25285834616ce788dcd3579409182cf65d786a70ea9b4fc160f070b03 WatchSource:0}: Error finding container b9080fb25285834616ce788dcd3579409182cf65d786a70ea9b4fc160f070b03: Status 404 returned error can't find the container with id b9080fb25285834616ce788dcd3579409182cf65d786a70ea9b4fc160f070b03 Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.760121 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e75dcd5dfbed39d54563ded9cd4599273751d1d291d2484c657d941a87260dc8 WatchSource:0}: Error finding container e75dcd5dfbed39d54563ded9cd4599273751d1d291d2484c657d941a87260dc8: Status 404 returned error can't find the container with id e75dcd5dfbed39d54563ded9cd4599273751d1d291d2484c657d941a87260dc8 Feb 23 10:06:07 crc kubenswrapper[4904]: W0223 10:06:07.760603 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-72787bff550980893679c4a9bb7f1246d772473c548264594aeec068ecc36581 WatchSource:0}: Error finding container 72787bff550980893679c4a9bb7f1246d772473c548264594aeec068ecc36581: Status 404 
returned error can't find the container with id 72787bff550980893679c4a9bb7f1246d772473c548264594aeec068ecc36581 Feb 23 10:06:07 crc kubenswrapper[4904]: E0223 10:06:07.798323 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="800ms" Feb 23 10:06:08 crc kubenswrapper[4904]: W0223 10:06:08.022659 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:08 crc kubenswrapper[4904]: E0223 10:06:08.022757 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.033725 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.035037 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.035071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.035084 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.035109 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:08 crc kubenswrapper[4904]: E0223 10:06:08.035696 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc" Feb 23 10:06:08 crc kubenswrapper[4904]: W0223 10:06:08.164092 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:08 crc kubenswrapper[4904]: E0223 10:06:08.164169 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.188288 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.196402 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:51:44.835534287 +0000 UTC Feb 23 10:06:08 crc 
kubenswrapper[4904]: I0223 10:06:08.258751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"29a5836878a29aa84df78a8ab8f2b821ee162deff26bcbcfc3c80c42dbdb4aab"} Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.259691 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"72787bff550980893679c4a9bb7f1246d772473c548264594aeec068ecc36581"} Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.261203 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e75dcd5dfbed39d54563ded9cd4599273751d1d291d2484c657d941a87260dc8"} Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.262276 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9080fb25285834616ce788dcd3579409182cf65d786a70ea9b4fc160f070b03"} Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.263346 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c016c177f7bf7e1fcd55939cb62f353abd9e31a11a2c2c0f474d4a412f278afb"} Feb 23 10:06:08 crc kubenswrapper[4904]: W0223 10:06:08.324776 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:08 crc kubenswrapper[4904]: E0223 10:06:08.324853 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:08 crc kubenswrapper[4904]: W0223 10:06:08.548455 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:08 crc kubenswrapper[4904]: E0223 10:06:08.548577 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:08 crc kubenswrapper[4904]: E0223 10:06:08.599565 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="1.6s" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.836142 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.838438 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.838472 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.838556 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:08 crc kubenswrapper[4904]: I0223 10:06:08.838586 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:08 crc kubenswrapper[4904]: E0223 10:06:08.839066 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.086708 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 10:06:09 crc kubenswrapper[4904]: E0223 10:06:09.087763 4904 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.188100 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.196520 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 12:20:40.222676941 +0000 UTC Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.269012 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28713307469ffc99bc8ff8278f201e6fb1857b0d04fd440a358b683ff0b71564"} Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.269064 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"150fc8552b9a7df5f0ad2c981decf9bd4c62f315ffcb6e351be3aec08618afff"} Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.269074 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de5ee9acfaefb66a9c397abfa324c77757c7c2618457b7c1a91b43f705633418"} Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.269083 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"00034118ac46fa80f07637055d7140743737693c2fb6b0f4bc1924c40c19eb94"} Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.269114 4904 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.270413 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.270467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.270487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.272090 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90" exitCode=0 Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.272151 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90"} Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.272270 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.273756 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.273791 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.273804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.274207 4904 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b" exitCode=0 Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.274303 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.274295 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b"} Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.275164 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.275209 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.275228 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.275813 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.276519 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.276614 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.276630 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.277769 4904 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609" exitCode=0 Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.277902 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.278078 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609"} Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.279224 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.279247 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.279259 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.280476 4904 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="98d3e253fe7eea90aaf8e37024c7a4a43b644f3f67f008f90345ca7f2ac31494" exitCode=0 Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.280511 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"98d3e253fe7eea90aaf8e37024c7a4a43b644f3f67f008f90345ca7f2ac31494"} Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.280559 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.281367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.281389 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.281399 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.627497 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:09 crc kubenswrapper[4904]: I0223 10:06:09.639360 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:09 crc kubenswrapper[4904]: W0223 10:06:09.829529 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:09 crc kubenswrapper[4904]: E0223 10:06:09.829593 4904 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:10 crc kubenswrapper[4904]: W0223 10:06:10.159025 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:10 crc kubenswrapper[4904]: E0223 10:06:10.159096 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.188595 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.196781 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:40:23.770193099 +0000 UTC Feb 23 10:06:10 crc kubenswrapper[4904]: E0223 10:06:10.200602 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="3.2s" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.286265 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7af1841577771e95a36feec9afb61f2ecbd72e797de39801f337f7e31249d9ac"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.286307 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.286316 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.286325 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.286337 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 
10:06:10.286360 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.287216 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.287236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.287246 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.288573 4904 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb" exitCode=0 Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.288616 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.288705 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.289539 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.289558 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.289567 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.291455 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.291511 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.292094 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.292165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.292220 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.295048 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.295065 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.295524 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0eed08557ce51ab7894bcee6fd84e7502ec700f821c9ef928da8bc6c72277d54"} Feb 23 10:06:10 crc 
kubenswrapper[4904]: I0223 10:06:10.295548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d73250754e428fccf72a9ad7ccba12d7c6f9fd91c3813dcf331f72fa4b9268c2"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.295557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"df16a8fb9b11c51430565d4edccdc4b5fa0084cc56dceda66e205d5a52191ab1"} Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.295940 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.296015 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.296087 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.296650 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.296690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.296704 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:10 crc kubenswrapper[4904]: W0223 10:06:10.419580 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:10 crc kubenswrapper[4904]: E0223 10:06:10.419645 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.439723 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.440962 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.441017 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.441035 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:10 crc kubenswrapper[4904]: I0223 10:06:10.441142 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:10 crc kubenswrapper[4904]: E0223 10:06:10.441894 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc" Feb 23 10:06:10 crc kubenswrapper[4904]: W0223 10:06:10.631451 4904 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Feb 23 10:06:10 crc kubenswrapper[4904]: E0223 10:06:10.631537 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.197198 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:28:40.620733065 +0000 UTC Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.247423 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300242 4904 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a" exitCode=0 Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300314 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a"} Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300373 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300437 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300498 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300538 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300441 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300380 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300927 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.300949 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301635 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301681 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301698 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301743 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301786 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301638 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301836 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301843 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301956 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301982 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.301997 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.302079 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.302095 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.302110 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:11 crc kubenswrapper[4904]: I0223 10:06:11.930909 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.176685 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.198323 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:33:30.278098508 +0000 UTC Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306136 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306158 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306178 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306209 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306236 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306156 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0"} Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306688 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6"} Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad"} Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.306778 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833"} Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307576 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307614 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307632 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307584 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307660 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307669 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307674 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:12 crc kubenswrapper[4904]: I0223 10:06:12.307637 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.199049 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:13:13.605332859 +0000 UTC Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.314783 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"182cae251c1bcf6cd4a48cb5f71c207368474029578d4dd10364ac1bf7f65dee"} Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.314909 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.315814 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.315845 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.315857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.333384 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.333474 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.333499 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.334664 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.334752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.334794 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.421373 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.642693 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.644335 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.644395 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.644419 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:13 crc kubenswrapper[4904]: I0223 10:06:13.644459 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.066431 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.066618 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.068094 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.068138 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.068150 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.199897 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:30:01.770325421 +0000 UTC Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.320495 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.321918 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.321982 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.321999 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.793822 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.794003 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.795465 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.795510 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:14 crc kubenswrapper[4904]: I0223 10:06:14.795529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:15 crc kubenswrapper[4904]: I0223 10:06:15.200737 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:09:36.063295063 +0000 UTC Feb 23 10:06:16 crc kubenswrapper[4904]: I0223 10:06:16.201243 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:32:22.150231833 +0000 UTC Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.201375 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:56:07.769999463 +0000 UTC Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.281214 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.281487 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.283155 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.283214 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.283225 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:17 crc kubenswrapper[4904]: E0223 10:06:17.337266 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.720217 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.720921 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:17 
crc kubenswrapper[4904]: I0223 10:06:17.722856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.722957 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:17 crc kubenswrapper[4904]: I0223 10:06:17.722978 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:18 crc kubenswrapper[4904]: I0223 10:06:18.201942 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:31:28.362779548 +0000 UTC Feb 23 10:06:19 crc kubenswrapper[4904]: I0223 10:06:19.202379 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:43:11.730650193 +0000 UTC Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.202830 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:35:12.91717198 +0000 UTC Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.683538 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.683685 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.684770 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.684809 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.684819 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.720674 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.720771 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 10:06:20 crc kubenswrapper[4904]: E0223 10:06:20.993612 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:20Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 23 10:06:20 crc kubenswrapper[4904]: I0223 10:06:20.995894 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:20Z is after 2026-02-23T05:33:13Z Feb 23 10:06:20 crc kubenswrapper[4904]: E0223 10:06:20.996119 4904 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:21 crc kubenswrapper[4904]: E0223 10:06:21.000275 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.002316 4904 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.002370 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 10:06:21 crc kubenswrapper[4904]: E0223 10:06:21.005778 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d82768954afa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,LastTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 10:06:21 crc kubenswrapper[4904]: W0223 10:06:21.006216 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z Feb 23 10:06:21 crc kubenswrapper[4904]: E0223 10:06:21.006285 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:21 crc kubenswrapper[4904]: W0223 10:06:21.007250 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z Feb 23 10:06:21 crc kubenswrapper[4904]: E0223 10:06:21.007313 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.009698 4904 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.009804 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 10:06:21 crc kubenswrapper[4904]: W0223 10:06:21.010421 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z Feb 23 10:06:21 crc kubenswrapper[4904]: E0223 10:06:21.010491 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:21 crc kubenswrapper[4904]: W0223 10:06:21.014230 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z Feb 23 10:06:21 crc kubenswrapper[4904]: E0223 10:06:21.014302 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.190221 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:21Z is after 2026-02-23T05:33:13Z Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.203517 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:07:57.747518369 +0000 UTC Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.339995 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.342057 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7af1841577771e95a36feec9afb61f2ecbd72e797de39801f337f7e31249d9ac" exitCode=255 Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.342110 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7af1841577771e95a36feec9afb61f2ecbd72e797de39801f337f7e31249d9ac"} Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.342255 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.343101 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.343122 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.343131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:21 crc kubenswrapper[4904]: I0223 10:06:21.343542 4904 scope.go:117] "RemoveContainer" containerID="7af1841577771e95a36feec9afb61f2ecbd72e797de39801f337f7e31249d9ac" Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.190417 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:22Z is after 2026-02-23T05:33:13Z Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.203651 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:42:11.085493961 +0000 UTC Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.345903 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.346347 4904 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.347622 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c" exitCode=255 Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.347653 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c"} Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.347694 4904 scope.go:117] "RemoveContainer" containerID="7af1841577771e95a36feec9afb61f2ecbd72e797de39801f337f7e31249d9ac" Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.347875 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.348736 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.348770 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.348782 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:22 crc kubenswrapper[4904]: I0223 10:06:22.349276 4904 scope.go:117] "RemoveContainer" containerID="4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c" Feb 23 10:06:22 crc kubenswrapper[4904]: E0223 10:06:22.349434 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.195454 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:23Z is after 2026-02-23T05:33:13Z Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.204121 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:05:24.364195533 +0000 UTC Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.339827 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.353861 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.357742 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.359269 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.359320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.359333 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.360004 4904 scope.go:117] "RemoveContainer" containerID="4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c" Feb 23 10:06:23 crc kubenswrapper[4904]: E0223 10:06:23.360210 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:06:23 crc kubenswrapper[4904]: I0223 10:06:23.363343 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.071470 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.071655 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.072866 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.072911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.072927 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.191399 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:24Z is after 2026-02-23T05:33:13Z Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.204634 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:44:08.721867893 +0000 UTC Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.360592 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.362090 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.362130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.362142 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.362920 
4904 scope.go:117] "RemoveContainer" containerID="4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c" Feb 23 10:06:24 crc kubenswrapper[4904]: E0223 10:06:24.363106 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:06:24 crc kubenswrapper[4904]: I0223 10:06:24.794209 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:25 crc kubenswrapper[4904]: I0223 10:06:25.190947 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:25Z is after 2026-02-23T05:33:13Z Feb 23 10:06:25 crc kubenswrapper[4904]: I0223 10:06:25.205623 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:55:56.94087907 +0000 UTC Feb 23 10:06:25 crc kubenswrapper[4904]: I0223 10:06:25.336797 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:25 crc kubenswrapper[4904]: I0223 10:06:25.363804 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:25 crc kubenswrapper[4904]: I0223 10:06:25.365163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:25 crc kubenswrapper[4904]: I0223 10:06:25.365249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:25 crc kubenswrapper[4904]: I0223 10:06:25.365272 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:25 crc kubenswrapper[4904]: I0223 10:06:25.366160 4904 scope.go:117] "RemoveContainer" containerID="4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c" Feb 23 10:06:25 crc kubenswrapper[4904]: E0223 10:06:25.366433 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:06:26 crc kubenswrapper[4904]: I0223 10:06:26.193352 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:26Z is after 2026-02-23T05:33:13Z Feb 23 10:06:26 crc kubenswrapper[4904]: I0223 10:06:26.205989 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:35:36.073906803 
+0000 UTC Feb 23 10:06:26 crc kubenswrapper[4904]: I0223 10:06:26.366688 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:26 crc kubenswrapper[4904]: I0223 10:06:26.368112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:26 crc kubenswrapper[4904]: I0223 10:06:26.368165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:26 crc kubenswrapper[4904]: I0223 10:06:26.368183 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:26 crc kubenswrapper[4904]: I0223 10:06:26.369054 4904 scope.go:117] "RemoveContainer" containerID="4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c" Feb 23 10:06:26 crc kubenswrapper[4904]: E0223 10:06:26.369324 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:06:27 crc kubenswrapper[4904]: I0223 10:06:27.191471 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:27Z is after 2026-02-23T05:33:13Z Feb 23 10:06:27 crc kubenswrapper[4904]: I0223 10:06:27.206845 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:44:24.365795059 +0000 UTC Feb 23 10:06:27 crc kubenswrapper[4904]: E0223 10:06:27.337474 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 10:06:27 crc kubenswrapper[4904]: E0223 10:06:27.398538 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:27Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 10:06:27 crc kubenswrapper[4904]: I0223 10:06:27.400730 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:27 crc kubenswrapper[4904]: I0223 10:06:27.403082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:27 crc kubenswrapper[4904]: I0223 10:06:27.403153 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:27 crc kubenswrapper[4904]: I0223 10:06:27.403173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:27 crc kubenswrapper[4904]: I0223 10:06:27.403217 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:27 crc kubenswrapper[4904]: E0223 10:06:27.406507 4904 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:27Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 10:06:28 crc kubenswrapper[4904]: I0223 10:06:28.191226 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:28Z is after 2026-02-23T05:33:13Z Feb 23 10:06:28 crc kubenswrapper[4904]: I0223 10:06:28.207631 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:17:15.746157607 +0000 UTC Feb 23 10:06:29 crc kubenswrapper[4904]: I0223 10:06:29.192793 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:29Z is after 2026-02-23T05:33:13Z Feb 23 10:06:29 crc kubenswrapper[4904]: I0223 10:06:29.208274 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:44:33.543289741 +0000 UTC Feb 23 10:06:29 crc kubenswrapper[4904]: I0223 10:06:29.463541 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 10:06:29 crc kubenswrapper[4904]: E0223 10:06:29.472742 4904 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:29 crc kubenswrapper[4904]: W0223 10:06:29.593413 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:29Z is after 2026-02-23T05:33:13Z Feb 23 10:06:29 crc kubenswrapper[4904]: E0223 10:06:29.593510 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.193407 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:30Z is after 2026-02-23T05:33:13Z Feb 23 10:06:30 crc 
kubenswrapper[4904]: I0223 10:06:30.208782 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:08:59.186965709 +0000 UTC Feb 23 10:06:30 crc kubenswrapper[4904]: W0223 10:06:30.464601 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:30Z is after 2026-02-23T05:33:13Z Feb 23 10:06:30 crc kubenswrapper[4904]: E0223 10:06:30.464830 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:30 crc kubenswrapper[4904]: W0223 10:06:30.488432 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:30Z is after 2026-02-23T05:33:13Z Feb 23 10:06:30 crc kubenswrapper[4904]: E0223 10:06:30.488532 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.713427 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.713579 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.714860 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.714894 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.714906 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.721381 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.721440 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 10:06:30 crc kubenswrapper[4904]: I0223 10:06:30.728248 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 23 10:06:31 crc kubenswrapper[4904]: E0223 10:06:31.008810 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:31Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d82768954afa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,LastTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 10:06:31 crc kubenswrapper[4904]: W0223 10:06:31.109416 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:31Z is after 2026-02-23T05:33:13Z Feb 23 10:06:31 crc kubenswrapper[4904]: E0223 10:06:31.109502 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:31Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:31 crc kubenswrapper[4904]: I0223 10:06:31.191294 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:31Z is after 2026-02-23T05:33:13Z Feb 23 10:06:31 crc kubenswrapper[4904]: I0223 10:06:31.209554 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:09:05.833738653 +0000 UTC Feb 23 10:06:31 crc kubenswrapper[4904]: I0223 10:06:31.379164 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:31 crc kubenswrapper[4904]: I0223 10:06:31.379904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:31 crc kubenswrapper[4904]: I0223 10:06:31.379931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:31 crc kubenswrapper[4904]: I0223 10:06:31.379940 4904 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:32 crc kubenswrapper[4904]: I0223 10:06:32.190628 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:32Z is after 2026-02-23T05:33:13Z Feb 23 10:06:32 crc kubenswrapper[4904]: I0223 10:06:32.209926 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:14:11.099801936 +0000 UTC Feb 23 10:06:33 crc kubenswrapper[4904]: I0223 10:06:33.189683 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:33Z is after 2026-02-23T05:33:13Z Feb 23 10:06:33 crc kubenswrapper[4904]: I0223 10:06:33.211023 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:02:13.172283499 +0000 UTC Feb 23 10:06:34 crc kubenswrapper[4904]: I0223 10:06:34.193796 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:34Z is after 2026-02-23T05:33:13Z Feb 23 10:06:34 crc kubenswrapper[4904]: I0223 10:06:34.211270 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:18:48.493730866 +0000 UTC Feb 23 10:06:34 crc kubenswrapper[4904]: E0223 10:06:34.402407 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:34Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 10:06:34 crc kubenswrapper[4904]: I0223 10:06:34.406764 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:34 crc kubenswrapper[4904]: I0223 10:06:34.408049 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:34 crc kubenswrapper[4904]: I0223 10:06:34.408080 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:34 crc kubenswrapper[4904]: I0223 10:06:34.408089 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:34 crc kubenswrapper[4904]: I0223 10:06:34.408120 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:34 crc kubenswrapper[4904]: E0223 10:06:34.410799 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:06:34Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 10:06:35 crc kubenswrapper[4904]: I0223 10:06:35.192798 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:35Z is after 2026-02-23T05:33:13Z Feb 23 10:06:35 crc kubenswrapper[4904]: I0223 10:06:35.212214 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:39:19.88414209 +0000 UTC Feb 23 10:06:36 crc kubenswrapper[4904]: I0223 10:06:36.191155 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:36Z is after 2026-02-23T05:33:13Z Feb 23 10:06:36 crc kubenswrapper[4904]: I0223 10:06:36.213253 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:38:01.388694734 +0000 UTC Feb 23 10:06:37 crc kubenswrapper[4904]: I0223 10:06:37.192957 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:37Z is after 2026-02-23T05:33:13Z Feb 23 10:06:37 crc kubenswrapper[4904]: I0223 10:06:37.213404 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:17:57.977300316 +0000 UTC Feb 23 10:06:37 crc kubenswrapper[4904]: I0223 10:06:37.254978 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:37 crc kubenswrapper[4904]: I0223 10:06:37.256751 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:37 crc kubenswrapper[4904]: I0223 10:06:37.256907 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:37 crc kubenswrapper[4904]: I0223 10:06:37.257000 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:37 crc kubenswrapper[4904]: I0223 10:06:37.257836 4904 scope.go:117] "RemoveContainer" containerID="4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c" Feb 23 10:06:37 crc kubenswrapper[4904]: E0223 10:06:37.337700 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.191976 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:38Z is after 2026-02-23T05:33:13Z Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.214370 4904 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:35:51.020764124 +0000 UTC Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.396516 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.397143 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.399792 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="43e196d048f8aa6ce9f48ac05407bea7db8b3cf4b62ea8d6e933c524b7532970" exitCode=255 Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.399838 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"43e196d048f8aa6ce9f48ac05407bea7db8b3cf4b62ea8d6e933c524b7532970"} Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.399885 4904 scope.go:117] "RemoveContainer" containerID="4d5183ae2ea5f6f81cb8d2a8a3826137e02a3600ff8b546c7dbace759e3c497c" Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.399994 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.401309 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.401343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.401355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:38 crc kubenswrapper[4904]: I0223 10:06:38.402105 4904 scope.go:117] "RemoveContainer" containerID="43e196d048f8aa6ce9f48ac05407bea7db8b3cf4b62ea8d6e933c524b7532970" Feb 23 10:06:38 crc kubenswrapper[4904]: E0223 10:06:38.402328 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.193701 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:39Z is after 2026-02-23T05:33:13Z Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.215284 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:54:17.06157887 +0000 UTC Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.406676 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.685900 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:42494->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.686027 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:42494->192.168.126.11:10357: read: connection reset by peer" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.686100 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.686316 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.688243 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.688322 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.688358 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.689128 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"de5ee9acfaefb66a9c397abfa324c77757c7c2618457b7c1a91b43f705633418"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 23 10:06:39 crc kubenswrapper[4904]: I0223 10:06:39.689386 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://de5ee9acfaefb66a9c397abfa324c77757c7c2618457b7c1a91b43f705633418" gracePeriod=30 Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.190607 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:40Z is after 2026-02-23T05:33:13Z Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.215413 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:39:59.556887739 +0000 UTC Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.416502 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.417142 4904 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="de5ee9acfaefb66a9c397abfa324c77757c7c2618457b7c1a91b43f705633418" exitCode=255 Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.417198 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"de5ee9acfaefb66a9c397abfa324c77757c7c2618457b7c1a91b43f705633418"} Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.417235 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"281e0ebee674b5fc03aceb14229de80bf33c4bb1ca5474f6f9e42c8f30155154"} Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.417364 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.418540 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.418592 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:40 crc kubenswrapper[4904]: I0223 10:06:40.418609 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:41 crc kubenswrapper[4904]: E0223 10:06:41.011865 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:41Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d82768954afa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,LastTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.191098 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:41Z is after 2026-02-23T05:33:13Z Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.216543 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:23:44.226102175 +0000 UTC Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.247582 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 
10:06:41 crc kubenswrapper[4904]: E0223 10:06:41.406418 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:41Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.411502 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.412792 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.412842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.412860 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.412895 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:41 crc kubenswrapper[4904]: E0223 10:06:41.418053 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:41Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.419268 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.420408 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.420455 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:41 crc kubenswrapper[4904]: I0223 10:06:41.420474 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:42 crc kubenswrapper[4904]: I0223 10:06:42.192422 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:42Z is after 2026-02-23T05:33:13Z Feb 23 10:06:42 crc kubenswrapper[4904]: I0223 10:06:42.216694 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:46:37.944231845 +0000 UTC Feb 23 10:06:42 crc kubenswrapper[4904]: W0223 10:06:42.583881 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:42Z is after 2026-02-23T05:33:13Z Feb 23 10:06:42 crc kubenswrapper[4904]: E0223 10:06:42.583986 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:43 crc kubenswrapper[4904]: I0223 10:06:43.191555 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:43Z is after 2026-02-23T05:33:13Z Feb 23 10:06:43 crc kubenswrapper[4904]: I0223 10:06:43.217188 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:20:00.135938118 +0000 UTC Feb 23 10:06:44 crc kubenswrapper[4904]: I0223 10:06:44.191686 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:44Z is after 2026-02-23T05:33:13Z Feb 23 10:06:44 crc kubenswrapper[4904]: I0223 10:06:44.218182 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:32:30.691859822 +0000 UTC Feb 23 10:06:44 crc kubenswrapper[4904]: I0223 10:06:44.793956 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:44 crc kubenswrapper[4904]: I0223 10:06:44.794159 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:44 crc kubenswrapper[4904]: I0223 10:06:44.795437 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:44 crc kubenswrapper[4904]: I0223 10:06:44.795468 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:44 crc kubenswrapper[4904]: I0223 10:06:44.795480 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:44 crc kubenswrapper[4904]: I0223 10:06:44.796132 4904 scope.go:117] "RemoveContainer" containerID="43e196d048f8aa6ce9f48ac05407bea7db8b3cf4b62ea8d6e933c524b7532970" Feb 23 10:06:44 crc kubenswrapper[4904]: E0223 10:06:44.796330 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.192929 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:45Z is after 2026-02-23T05:33:13Z Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.218334 4904 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:13:23.577896484 +0000 UTC Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.336212 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.429900 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.431218 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.431299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.431327 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.432398 4904 scope.go:117] "RemoveContainer" containerID="43e196d048f8aa6ce9f48ac05407bea7db8b3cf4b62ea8d6e933c524b7532970" Feb 23 10:06:45 crc kubenswrapper[4904]: E0223 10:06:45.432806 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:06:45 crc kubenswrapper[4904]: I0223 10:06:45.994341 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 10:06:45 crc kubenswrapper[4904]: E0223 10:06:45.999587 4904 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:45Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 10:06:46 crc kubenswrapper[4904]: E0223 10:06:46.000923 4904 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 23 10:06:46 crc kubenswrapper[4904]: I0223 10:06:46.190680 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:46Z is after 2026-02-23T05:33:13Z Feb 23 10:06:46 crc kubenswrapper[4904]: I0223 10:06:46.219249 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:18:32.582499977 +0000 UTC Feb 23 10:06:47 crc kubenswrapper[4904]: I0223 10:06:47.190461 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:47Z is after 2026-02-23T05:33:13Z Feb 23 10:06:47 crc kubenswrapper[4904]: I0223 10:06:47.219484 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:07:48.76922596 +0000 UTC Feb 23 10:06:47 crc kubenswrapper[4904]: E0223 10:06:47.337795 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 10:06:47 crc kubenswrapper[4904]: I0223 10:06:47.721106 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:06:47 crc kubenswrapper[4904]: I0223 10:06:47.721243 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:47 crc kubenswrapper[4904]: I0223 10:06:47.722259 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:47 crc kubenswrapper[4904]: I0223 10:06:47.722299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:47 crc kubenswrapper[4904]: I0223 10:06:47.722315 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:48 crc kubenswrapper[4904]: I0223 10:06:48.192986 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:48Z is after 2026-02-23T05:33:13Z Feb 23 10:06:48 crc kubenswrapper[4904]: I0223 10:06:48.220389 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:21:15.852750941 +0000 UTC Feb 23 10:06:48 crc kubenswrapper[4904]: E0223 10:06:48.412701 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:48Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 10:06:48 crc kubenswrapper[4904]: I0223 10:06:48.418948 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:06:48 crc kubenswrapper[4904]: I0223 10:06:48.421787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:06:48 crc kubenswrapper[4904]: I0223 10:06:48.421868 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:06:48 crc kubenswrapper[4904]: I0223 10:06:48.421896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:06:48 crc kubenswrapper[4904]: I0223 10:06:48.421943 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 10:06:48 crc kubenswrapper[4904]: E0223 10:06:48.428324 4904 kubelet_node_status.go:99] "Unable to 
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:48Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 10:06:49 crc kubenswrapper[4904]: I0223 10:06:49.191226 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:49Z is after 2026-02-23T05:33:13Z Feb 23 10:06:49 crc kubenswrapper[4904]: I0223 10:06:49.221018 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:54:35.255724099 +0000 UTC Feb 23 10:06:50 crc kubenswrapper[4904]: I0223 10:06:50.190946 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:50Z is after 2026-02-23T05:33:13Z Feb 23 10:06:50 crc kubenswrapper[4904]: I0223 10:06:50.221648 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:27:08.500743141 +0000 UTC Feb 23 10:06:50 crc kubenswrapper[4904]: I0223 10:06:50.722171 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 10:06:50 crc kubenswrapper[4904]: I0223 10:06:50.722484 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 10:06:51 crc kubenswrapper[4904]: E0223 10:06:51.015359 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:51Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d82768954afa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,LastTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 10:06:51 crc kubenswrapper[4904]: I0223 10:06:51.192162 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": 
Feb 23 10:06:51 crc kubenswrapper[4904]: I0223 10:06:51.192162 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:51Z is after 2026-02-23T05:33:13Z
Feb 23 10:06:51 crc kubenswrapper[4904]: I0223 10:06:51.222193 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:57:50.60360976 +0000 UTC
Feb 23 10:06:52 crc kubenswrapper[4904]: I0223 10:06:52.193100 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:52Z is after 2026-02-23T05:33:13Z
Feb 23 10:06:52 crc kubenswrapper[4904]: I0223 10:06:52.222472 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:51:15.495957172 +0000 UTC
Feb 23 10:06:52 crc kubenswrapper[4904]: W0223 10:06:52.459529 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:52Z is after 2026-02-23T05:33:13Z
Feb 23 10:06:52 crc kubenswrapper[4904]: E0223 10:06:52.459636 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 10:06:53 crc kubenswrapper[4904]: I0223 10:06:53.192970 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:53Z is after 2026-02-23T05:33:13Z
Feb 23 10:06:53 crc kubenswrapper[4904]: I0223 10:06:53.222965 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:40:44.175967173 +0000 UTC
Feb 23 10:06:53 crc kubenswrapper[4904]: W0223 10:06:53.644907 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:53Z is after 2026-02-23T05:33:13Z
Feb 23 10:06:53 crc kubenswrapper[4904]: E0223 10:06:53.645016 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:53Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
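The paired reflector.go warning/error lines above come from client-go informers whose initial List against the API server keeps failing TLS verification, so the reflector logs and retries. A minimal sketch of such an informer, using the same metadata.name=crc field selector that appears in the failing URL (the kubeconfig path and resync period are assumptions):

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			// Same selector as the failing list request in the log above.
			o.FieldSelector = "metadata.name=crc"
		}),
	)
	nodeInformer := factory.Core().V1().Nodes().Informer()
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Blocks until the initial List/Watch succeeds; with an expired serving
	// certificate the reflector keeps retrying, exactly as the log shows.
	if ok := cache.WaitForCacheSync(stop, nodeInformer.HasSynced); !ok {
		fmt.Println("cache never synced")
	}
}
```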
Feb 23 10:06:54 crc kubenswrapper[4904]: I0223 10:06:54.193103 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:54Z is after 2026-02-23T05:33:13Z
Feb 23 10:06:54 crc kubenswrapper[4904]: I0223 10:06:54.223837 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:20:42.650392255 +0000 UTC
Feb 23 10:06:54 crc kubenswrapper[4904]: W0223 10:06:54.348184 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:54Z is after 2026-02-23T05:33:13Z
Feb 23 10:06:54 crc kubenswrapper[4904]: E0223 10:06:54.348574 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 10:06:55 crc kubenswrapper[4904]: I0223 10:06:55.193243 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:55Z is after 2026-02-23T05:33:13Z
Feb 23 10:06:55 crc kubenswrapper[4904]: I0223 10:06:55.224208 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:50:53.472457429 +0000 UTC
Feb 23 10:06:55 crc kubenswrapper[4904]: E0223 10:06:55.416590 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:55Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 23 10:06:55 crc kubenswrapper[4904]: I0223 10:06:55.428612 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 10:06:55 crc kubenswrapper[4904]: I0223 10:06:55.431225 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:06:55 crc kubenswrapper[4904]: I0223 10:06:55.431389 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:06:55 crc kubenswrapper[4904]: I0223 10:06:55.431486 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:06:55 crc kubenswrapper[4904]: I0223 10:06:55.431593 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc"
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:55Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 10:06:56 crc kubenswrapper[4904]: I0223 10:06:56.195456 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:56Z is after 2026-02-23T05:33:13Z Feb 23 10:06:56 crc kubenswrapper[4904]: I0223 10:06:56.225430 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:50:09.203103575 +0000 UTC Feb 23 10:06:57 crc kubenswrapper[4904]: I0223 10:06:57.191289 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:57Z is after 2026-02-23T05:33:13Z Feb 23 10:06:57 crc kubenswrapper[4904]: I0223 10:06:57.226248 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:08:11.692427773 +0000 UTC Feb 23 10:06:57 crc kubenswrapper[4904]: E0223 10:06:57.338571 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 10:06:58 crc kubenswrapper[4904]: I0223 10:06:58.191471 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:58Z is after 2026-02-23T05:33:13Z Feb 23 10:06:58 crc kubenswrapper[4904]: I0223 10:06:58.226615 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:28:26.554122121 +0000 UTC Feb 23 10:06:59 crc kubenswrapper[4904]: I0223 10:06:59.192060 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:06:59Z is after 2026-02-23T05:33:13Z Feb 23 10:06:59 crc kubenswrapper[4904]: I0223 10:06:59.227652 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:43:01.215069378 +0000 UTC Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.193357 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:13Z Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.228274 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.228274 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:19:12.448050247 +0000 UTC
Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.254904 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.257012 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.257105 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.257164 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.262164 4904 scope.go:117] "RemoveContainer" containerID="43e196d048f8aa6ce9f48ac05407bea7db8b3cf4b62ea8d6e933c524b7532970"
Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.721227 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 10:07:00 crc kubenswrapper[4904]: I0223 10:07:00.721281 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 10:07:01 crc kubenswrapper[4904]: E0223 10:07:01.026265 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:01Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d82768954afa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,LastTimestamp:2026-02-23 10:06:07.186807546 +0000 UTC m=+0.607181069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.190645 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:01Z is after 2026-02-23T05:33:13Z
Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.228577 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:17:42.195056327 +0000 UTC
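The startup-probe failures above (context deadline exceeded earlier, net/http request canceled here) are plain HTTPS GETs against the container's /healthz endpoint that exceed the prober's client timeout. A standalone sketch reproducing the same Client.Timeout error shape; the one-second timeout and the skip-verify transport are assumptions for illustration, and the kubelet's prober uses its own transport and configured probe timeouts:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // assumed; the probe's timeoutSeconds governs the real one
		Transport: &http.Transport{
			// node-local health endpoints commonly serve self-signed certs
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:10357/healthz")
	if err != nil {
		// With the endpoint unresponsive this prints a Client.Timeout error
		// like the probe output recorded above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```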
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.479483 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.482390 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f" exitCode=255 Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.482434 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f"} Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.482468 4904 scope.go:117] "RemoveContainer" containerID="43e196d048f8aa6ce9f48ac05407bea7db8b3cf4b62ea8d6e933c524b7532970" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.482594 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.484011 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.484085 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.484105 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.485086 4904 scope.go:117] "RemoveContainer" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f" Feb 23 10:07:01 crc kubenswrapper[4904]: E0223 10:07:01.485464 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.555461 4904 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.945594 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.945734 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.946661 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.946683 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:01 crc kubenswrapper[4904]: I0223 10:07:01.946692 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.230009 4904 
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.230009 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:22:26.111017772 +0000 UTC
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.435016 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.437031 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.437100 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.437127 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.437350 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.449998 4904 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.450435 4904 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.450473 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.455639 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.455695 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.455705 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.455784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.455798 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:02Z","lastTransitionTime":"2026-02-23T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.475699 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.486171 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.486220 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.486236 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.486260 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.486277 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:02Z","lastTransitionTime":"2026-02-23T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.491111 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.500486 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.508888 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.508935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.508952 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.508977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.508999 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:02Z","lastTransitionTime":"2026-02-23T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.521513 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.532339 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.532384 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.532395 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.532412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:02 crc kubenswrapper[4904]: I0223 10:07:02.532425 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:02Z","lastTransitionTime":"2026-02-23T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.547606 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.547740 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.547773 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.648185 4904 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.749181 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.850074 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:02 crc kubenswrapper[4904]: E0223 10:07:02.950969 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.051422 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.152227 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: I0223 10:07:03.230457 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:56:51.693206165 +0000 UTC Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.252666 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.353426 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.454478 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.555037 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.656161 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.756762 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.857432 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:03 crc kubenswrapper[4904]: E0223 10:07:03.958273 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.058787 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.158976 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: I0223 10:07:04.231174 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:22:42.854010146 +0000 UTC Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.263026 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.364070 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.465174 4904 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.565965 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.666644 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.767143 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: I0223 10:07:04.794843 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:07:04 crc kubenswrapper[4904]: I0223 10:07:04.795015 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:07:04 crc kubenswrapper[4904]: I0223 10:07:04.796582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:04 crc kubenswrapper[4904]: I0223 10:07:04.796645 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:04 crc kubenswrapper[4904]: I0223 10:07:04.796666 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:04 crc kubenswrapper[4904]: I0223 10:07:04.797891 4904 scope.go:117] "RemoveContainer" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.798190 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.867475 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:04 crc kubenswrapper[4904]: E0223 10:07:04.968455 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.069345 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.170178 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: I0223 10:07:05.231616 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:50:16.239853052 +0000 UTC Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.271220 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: I0223 10:07:05.337096 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.371496 4904 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.472648 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: I0223 10:07:05.503936 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:07:05 crc kubenswrapper[4904]: I0223 10:07:05.504772 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:05 crc kubenswrapper[4904]: I0223 10:07:05.504813 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:05 crc kubenswrapper[4904]: I0223 10:07:05.504822 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:05 crc kubenswrapper[4904]: I0223 10:07:05.505337 4904 scope.go:117] "RemoveContainer" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.505523 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.573756 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.674854 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.775558 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.876336 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:05 crc kubenswrapper[4904]: E0223 10:07:05.977082 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.078116 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.179065 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: I0223 10:07:06.232132 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:38:18.039630609 +0000 UTC Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.279590 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.380469 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.481160 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.581700 4904 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.682821 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.783385 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.884215 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:06 crc kubenswrapper[4904]: E0223 10:07:06.984657 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.085472 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.185767 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: I0223 10:07:07.233153 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:45:05.250349951 +0000 UTC Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.286204 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.338758 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.386703 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.487035 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.587375 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.687863 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.788528 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.889290 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:07 crc kubenswrapper[4904]: E0223 10:07:07.989955 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.090396 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.190940 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: I0223 10:07:08.234317 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:30:42.966547218 +0000 UTC Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 
10:07:08.291643 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.392048 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.492782 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.592896 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.693402 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.794292 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.895058 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:08 crc kubenswrapper[4904]: E0223 10:07:08.996164 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.096742 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.197757 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: I0223 10:07:09.235503 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:24:00.147237468 +0000 UTC Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.298226 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.399019 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.499600 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.600308 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.701108 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.801850 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:09 crc kubenswrapper[4904]: E0223 10:07:09.903131 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.004225 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.105048 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.205844 4904 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.236666 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:08:06.112462073 +0000 UTC Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.268680 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58294->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.268797 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58294->192.168.126.11:10357: read: connection reset by peer" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.268903 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.269152 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.271015 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.271071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.271096 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.272275 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"281e0ebee674b5fc03aceb14229de80bf33c4bb1ca5474f6f9e42c8f30155154"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.272491 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://281e0ebee674b5fc03aceb14229de80bf33c4bb1ca5474f6f9e42c8f30155154" gracePeriod=30 Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.306694 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.407218 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.507571 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.516461 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.517509 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.518062 4904 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="281e0ebee674b5fc03aceb14229de80bf33c4bb1ca5474f6f9e42c8f30155154" exitCode=255 Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.518115 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"281e0ebee674b5fc03aceb14229de80bf33c4bb1ca5474f6f9e42c8f30155154"} Feb 23 10:07:10 crc kubenswrapper[4904]: I0223 10:07:10.518180 4904 scope.go:117] "RemoveContainer" containerID="de5ee9acfaefb66a9c397abfa324c77757c7c2618457b7c1a91b43f705633418" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.608271 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.709285 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.809813 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:10 crc kubenswrapper[4904]: E0223 10:07:10.910472 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.011301 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.111405 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.212419 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: I0223 10:07:11.236948 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:50:13.002769281 +0000 UTC Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.312683 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.413893 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.514888 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: I0223 10:07:11.522286 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 10:07:11 crc kubenswrapper[4904]: I0223 10:07:11.523120 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"96e7ed358f51858d5bd375d03fc518e0630c9669389d55dc19ce8cdd4fc5475d"} Feb 23 10:07:11 crc kubenswrapper[4904]: I0223 10:07:11.523223 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:07:11 crc kubenswrapper[4904]: I0223 10:07:11.523927 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:11 crc kubenswrapper[4904]: I0223 10:07:11.523962 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:11 crc kubenswrapper[4904]: I0223 10:07:11.523976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.615234 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.715482 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.815849 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:11 crc kubenswrapper[4904]: E0223 10:07:11.916358 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.017033 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.118033 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.218490 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.237920 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:27:28.397978282 +0000 UTC Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.319193 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.419737 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.520267 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.524798 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.525666 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.525702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.525737 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.602906 4904 kubelet_node_status.go:585] 
"Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.606770 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.606908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.606997 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.607086 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.607201 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:12Z","lastTransitionTime":"2026-02-23T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.616866 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.622327 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.622378 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.622390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.622403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.622411 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:12Z","lastTransitionTime":"2026-02-23T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.633441 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.643744 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.643780 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.643794 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.643811 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.643825 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:12Z","lastTransitionTime":"2026-02-23T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.656799 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.665530 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.665651 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.665755 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.665864 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:12 crc kubenswrapper[4904]: I0223 10:07:12.665971 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:12Z","lastTransitionTime":"2026-02-23T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.678927 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.679246 4904 
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.679360 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.780319 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.881021 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:12 crc kubenswrapper[4904]: E0223 10:07:12.981959 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.083166 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.183287 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: I0223 10:07:13.238599 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:11:57.047348431 +0000 UTC Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.284269 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.385390 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.486042 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.586678 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.687585 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.788117 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.888678 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:13 crc kubenswrapper[4904]: E0223 10:07:13.989254 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.090271 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.191076 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: I0223 10:07:14.239144 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:01:22.393489955 +0000 UTC Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.291875 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.392723 4904 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.493497 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.593991 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.694759 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.795428 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.895936 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:14 crc kubenswrapper[4904]: E0223 10:07:14.996423 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.096946 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.197576 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: I0223 10:07:15.240084 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:46:30.699632295 +0000 UTC Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.297728 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.397932 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.498430 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.598743 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.699519 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.800020 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:15 crc kubenswrapper[4904]: E0223 10:07:15.900586 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.001241 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.102334 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.203082 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: I0223 10:07:16.240631 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:21:20.208059993 +0000 UTC Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.303352 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.403691 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.504543 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.605620 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.706410 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.806919 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:16 crc kubenswrapper[4904]: E0223 10:07:16.907626 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.008377 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.108688 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.209778 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: I0223 10:07:17.241183 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:51:59.18982532 +0000 UTC Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.310872 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.339224 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.411933 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.512762 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.613345 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.714413 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 10:07:17 crc kubenswrapper[4904]: I0223 10:07:17.720732 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:07:17 crc kubenswrapper[4904]: I0223 10:07:17.720954 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 10:07:17 crc kubenswrapper[4904]: I0223 10:07:17.722066 4904 
Feb 23 10:07:17 crc kubenswrapper[4904]: I0223 10:07:17.722100 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:17 crc kubenswrapper[4904]: I0223 10:07:17.722111 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:17 crc kubenswrapper[4904]: I0223 10:07:17.727545 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.815153 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:17 crc kubenswrapper[4904]: E0223 10:07:17.916023 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.002685 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.016332 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.019799 4904 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.037651 4904 csr.go:261] certificate signing request csr-k6t8m is approved, waiting to be issued
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.046902 4904 csr.go:257] certificate signing request csr-k6t8m is issued
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.116421 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.216557 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.241919 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:18:05.261172073 +0000 UTC
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.255466 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.256834 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.256932 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.256963 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.258398 4904 scope.go:117] "RemoveContainer" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.258848 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.317318 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.418442 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.519373 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.536797 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.537137 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.537861 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.537901 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:18 crc kubenswrapper[4904]: I0223 10:07:18.537914 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.620181 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.721014 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.821610 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:18 crc kubenswrapper[4904]: E0223 10:07:18.922103 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.022681 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: I0223 10:07:19.047955 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-23 10:02:18 +0000 UTC, rotation deadline is 2026-11-10 17:31:35.125811903 +0000 UTC
Feb 23 10:07:19 crc kubenswrapper[4904]: I0223 10:07:19.047996 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6247h24m16.077819325s for next certificate rotation
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.123687 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.224540 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: I0223 10:07:19.242900 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:19:47.901804354 +0000 UTC
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.325022 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
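
Note: the CSR lines above show the kube-apiserver-client-kubelet rotation completing (csr-k6t8m approved, then issued), while the kubelet-serving deadline printed each second keeps changing because the certificate manager re-draws a jittered rotation deadline on every evaluation. A minimal sketch of that computation; the 0.7/0.2 constants are an assumption based on upstream k8s.io/client-go/util/certificate, not read from this host:

    // rotation_deadline.go - sketch of a jittered certificate rotation deadline.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point in roughly the 70-90% window of
    // the certificate's validity, which is why each log line above shows a
    // different deadline for the same expiry.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Validity taken from the kube-apiserver-client-kubelet lines above.
        notBefore := time.Date(2026, 2, 23, 10, 2, 18, 0, time.UTC)
        notAfter := time.Date(2027, 2, 23, 10, 2, 18, 0, time.UTC)
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }

Consistent with that window, the "Waiting 6247h24m16s" logged at 10:07:19.047996 is about 260 days, roughly 71% of the one-year client certificate lifetime.
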
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.425518 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.525764 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: I0223 10:07:19.538910 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 10:07:19 crc kubenswrapper[4904]: I0223 10:07:19.539703 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:19 crc kubenswrapper[4904]: I0223 10:07:19.539750 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:19 crc kubenswrapper[4904]: I0223 10:07:19.539762 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.626006 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.726121 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.826987 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:19 crc kubenswrapper[4904]: E0223 10:07:19.927793 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.028309 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.129050 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.230108 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: I0223 10:07:20.243632 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:12:18.395660083 +0000 UTC
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.330637 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.431467 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.531738 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.632078 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.732301 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.832941 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:20 crc kubenswrapper[4904]: E0223 10:07:20.933699 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:21 crc kubenswrapper[4904]: E0223 10:07:21.033864 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:21 crc kubenswrapper[4904]: E0223 10:07:21.134975 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:21 crc kubenswrapper[4904]: E0223 10:07:21.235314 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.244631 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:35:25.740914479 +0000 UTC
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.254379 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.254533 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.255852 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.255880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.255888 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:21 crc kubenswrapper[4904]: E0223 10:07:21.336407 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.433050 4904 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 23 10:07:21 crc kubenswrapper[4904]: E0223 10:07:21.437078 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.490167 4904 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.539407 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.539444 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.539456 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.539475 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.539487 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:21Z","lastTransitionTime":"2026-02-23T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
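
Note: the setters.go:603 condition above is the standard v1.NodeCondition the kubelet writes when the container runtime network is not ready. A minimal sketch that reproduces the same JSON shape with the upstream API types; values are copied from the log, but the timestamps will differ when run:

    // ready_condition.go - builds the Ready=False condition seen above.
    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cond := v1.NodeCondition{
            Type:               v1.NodeReady,
            Status:             v1.ConditionFalse,
            LastHeartbeatTime:  metav1.Now(),
            LastTransitionTime: metav1.Now(),
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
        }
        b, _ := json.Marshal(cond)
        fmt.Println(string(b))
    }
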
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.642371 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.642402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.642411 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.642426 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.642436 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:21Z","lastTransitionTime":"2026-02-23T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.744947 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.744975 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.744985 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.745000 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.745011 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:21Z","lastTransitionTime":"2026-02-23T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.847605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.847641 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.847656 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.847671 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.847681 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:21Z","lastTransitionTime":"2026-02-23T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.949993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.950032 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.950042 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.950058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:21 crc kubenswrapper[4904]: I0223 10:07:21.950069 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:21Z","lastTransitionTime":"2026-02-23T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.052104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.052153 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.052162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.052175 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.052187 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.154133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.154161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.154170 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.154183 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.154191 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.208078 4904 apiserver.go:52] "Watching apiserver"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.211397 4904 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.211685 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.212035 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.212067 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.212098 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.212170 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.212173 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.212263 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
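
Note: NetworkReady stays false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (the path is taken from the kubelet messages above; on this cluster the network provider is expected to drop its config there once its pods start). A minimal check for the same condition; the .conf/.conflist/.json extensions are the conventional set libcni loads and are an assumption here, not a statement about this specific runtime:

    // cni_check.go - looks for CNI network configs in the directory named above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path taken from the kubelet message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file; NetworkReady will stay false")
        }
    }
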
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.212331 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.212370 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.212474 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.214029 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.214093 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.214207 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.214331 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.214415 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.214422 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.214520 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.214599 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.215495 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.244953 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:11:21.400968463 +0000 UTC
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.250215 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.256385 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.256426 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.256442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.256459 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.256470 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.262601 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.275557 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.284242 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.293542 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.296489 4904 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.301998 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.310060 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.318977 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.358278 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.358512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.358574 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.358659 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.358738 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
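
Note: every status patch above fails the same way: the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod is not serving yet (its own pod, network-node-identity-vrzqb, is still in ContainerCreating), so the apiserver's webhook call gets connection refused. A minimal connectivity probe against that endpoint, assuming it is acceptable to skip TLS verification for a one-off reachability check:

    // webhook_probe.go - checks whether the webhook endpoint accepts connections.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 10 * time.Second,
            Transport: &http.Transport{
                // Reachability only; do not skip verification in real clients.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
            "application/json", strings.NewReader("{}"))
        if err != nil {
            // Matches the log: dial tcp 127.0.0.1:9743: connect: connection refused
            fmt.Println("webhook unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("webhook responded:", resp.Status)
    }
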
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.392623 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.392657 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.392675 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.392692 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.392706 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.392738 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.392754 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.393062 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.393101 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.393233 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.393427 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.393642 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.393648 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.393902 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394185 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394210 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394227 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394242 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394259 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394278 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394294 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394315 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394333 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394350 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394366 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394382 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394398 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394488 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394510 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394529 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394546 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394562 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394577 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394588 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394594 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394638 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394651 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394706 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394749 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394774 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394780 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394786 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394796 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394860 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394889 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394914 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394936 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394962 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394983 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.394986 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395004 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395025 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395081 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.395085 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:07:22.895066022 +0000 UTC m=+76.315439635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395097 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395118 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395141 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395162 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395186 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395207 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395223 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395088 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395355 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395443 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395509 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395524 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395621 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395623 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395622 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395905 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395947 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395978 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395988 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.395229 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396032 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396058 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396081 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396103 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396125 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396146 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396166 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396188 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396214 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396210 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396261 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396217 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396287 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396311 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396324 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396333 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396430 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396437 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396456 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396462 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396488 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396517 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396541 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396567 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396592 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396614 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396636 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396661 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396684 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396759 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: 
"v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396793 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.396805 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397036 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397069 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397098 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397258 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397334 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397602 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397641 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397669 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397694 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397733 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397755 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397776 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397805 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397835 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397858 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397882 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397906 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397930 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397953 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.397974 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398000 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398011 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398035 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398061 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398064 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398086 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398091 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398110 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398150 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398155 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398199 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398226 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398279 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398302 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398327 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398352 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398373 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398395 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398418 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398475 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398582 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398605 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398628 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398652 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398673 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398661 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398698 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398743 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398771 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398793 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398816 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398839 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398864 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398885 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398899 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398907 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398947 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.398974 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399003 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399023 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399044 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399066 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399085 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399107 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399123 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod 
"43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399132 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399155 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399173 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399201 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399224 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399272 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399320 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399348 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399937 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399967 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399991 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400015 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400037 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400057 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400080 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400101 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400124 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400145 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400169 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400190 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400210 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400232 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400254 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400278 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400300 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400324 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400347 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400371 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400394 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400416 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400436 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400456 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400475 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400496 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400521 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400545 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400569 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400592 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400615 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400638 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400660 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400684 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400729 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400754 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400775 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400795 4904 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400817 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400838 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400859 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400883 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400911 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400933 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400954 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400977 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401002 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401028 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401051 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401078 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401099 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401117 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401132 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401147 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401166 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401188 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401223 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" 
(UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401248 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401273 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401296 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401314 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401331 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401347 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401364 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401381 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401396 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401434 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401485 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401503 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401522 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401539 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401556 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401592 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401607 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401626 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401643 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401660 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401675 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401692 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401750 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401763 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401772 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401782 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401793 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: 
I0223 10:07:22.401806 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401819 4904 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401832 4904 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401845 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401859 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401872 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401886 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401899 4904 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401912 4904 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401943 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401959 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401972 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401985 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc 
kubenswrapper[4904]: I0223 10:07:22.401997 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402010 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402021 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402034 4904 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402047 4904 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402058 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402070 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402083 4904 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402095 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399553 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399608 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399702 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.399959 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400101 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400188 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400306 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400325 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400355 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400528 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400589 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400595 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.400620 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401291 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401651 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401730 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.401891 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402069 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402101 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402441 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402454 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402464 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402488 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402662 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402782 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402908 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403007 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403062 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403070 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403161 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403231 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403362 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403414 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403474 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403662 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403682 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.403890 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.404170 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.404545 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.404776 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.404878 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.405188 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.405556 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.405779 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.405868 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.405940 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.407396 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.402109 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411351 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411366 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411376 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411387 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411400 4904 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411410 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411422 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411434 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411445 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411455 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411465 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411476 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411487 4904 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411498 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411509 4904 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411522 4904 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411535 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411546 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411558 4904 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411569 4904 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411580 4904 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.411917 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.412053 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.412125 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 10:07:22.912103528 +0000 UTC m=+76.332477151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.412953 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.413949 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.414620 4904 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.415762 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.415968 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.419927 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.420000 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.420032 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:22.92002223 +0000 UTC m=+76.340395743 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.427391 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.427569 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.427793 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.427815 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.427827 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.427868 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:22.927858469 +0000 UTC m=+76.348231982 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.427907 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.427916 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.427922 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.427944 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:22.927936831 +0000 UTC m=+76.348310464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.428198 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.428501 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.428803 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
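The nestedpendingoperations.go:348 entries above show how the kubelet throttles a failing volume mount: after each MountVolume.SetUp error it refuses retries for durationBeforeRetry (500ms here), and the delay grows on repeated failures of the same operation. A minimal sketch of that grow-and-cap delay, assuming a doubling factor and a 2m2s ceiling (both assumptions for illustration; this is not the kubelet's actual code):

package main

import (
	"fmt"
	"time"
)

// retryDelay mimics the per-volume "durationBeforeRetry" throttling
// visible in the nestedpendingoperations.go entries above. The
// doubling factor and the 2m2s ceiling are assumptions, not values
// taken from kubelet source.
type retryDelay struct {
	current time.Duration
	max     time.Duration
}

// next reports how long retries stay blocked after this failure,
// then doubles the stored delay, clamping it at the ceiling.
func (r *retryDelay) next() time.Duration {
	d := r.current
	r.current *= 2
	if r.current > r.max {
		r.current = r.max
	}
	return d
}

func main() {
	// One tracker per operation key, matching the "{volumeName:...
	// podName:...}" keys printed in the log.
	r := &retryDelay{current: 500 * time.Millisecond, max: 2*time.Minute + 2*time.Second}
	for i := 1; i <= 5; i++ {
		fmt.Printf("failure %d: no retries permitted for %v\n", i, r.next())
	}
}

The per-key state is the point: each {volumeName podName} operation in the log carries its own delay, so one unmountable volume cannot block retries for the others.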
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.431245 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.433390 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.438619 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.440730 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.443905 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.444173 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.444187 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.445958 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.446011 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.445779 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.446157 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.446306 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.446320 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.446498 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.446632 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.446775 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.446792 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.447208 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.447254 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.447812 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.447851 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.448107 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.448477 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.448522 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.449001 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.449111 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.449117 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.449201 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.449640 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.450520 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.451021 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.451238 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.451239 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.451264 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.451306 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.451408 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.452205 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.452268 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.452452 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.452967 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453014 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453147 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453161 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453359 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453404 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453558 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453659 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453825 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.453803 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454037 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454064 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454070 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454148 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454185 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454250 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454406 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454469 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454565 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454573 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454571 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454677 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454699 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454705 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454734 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454700 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454758 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454770 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454790 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.454953 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.455906 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.455941 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.456017 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.456130 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.456210 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.456361 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.456896 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.457846 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.457901 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458127 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458145 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458157 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458174 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458429 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458440 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458451 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458600 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458600 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458670 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458743 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.458817 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.459830 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.461599 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.461638 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.461651 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.461668 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.461680 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.469813 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.473054 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.484668 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.487589 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512313 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512350 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512746 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512763 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512772 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512781 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512791 4904 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512799 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512808 4904 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512816 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512825 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512834 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node 
\"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512843 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512852 4904 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512860 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512869 4904 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512877 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512885 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512896 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512905 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512913 4904 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512922 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512929 4904 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512939 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512948 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" 
DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512956 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512964 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512972 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512981 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512990 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512998 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513005 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512999 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513014 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513079 4904 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513095 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513108 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513122 4904 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513136 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.512924 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513150 4904 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513187 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513197 4904 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513206 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513215 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513224 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513232 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513244 4904 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513254 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513264 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" 
Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513273 4904 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513283 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513292 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513300 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513309 4904 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513317 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513326 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513334 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513342 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513351 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513359 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513369 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513377 4904 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513386 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513394 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513448 4904 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513456 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513464 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513472 4904 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513480 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513488 4904 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513496 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513505 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513513 4904 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513520 4904 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513528 4904 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc 
kubenswrapper[4904]: I0223 10:07:22.513535 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513543 4904 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513552 4904 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513559 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513567 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513576 4904 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513584 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513592 4904 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513600 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513608 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513616 4904 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513623 4904 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513639 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 
10:07:22.513647 4904 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513656 4904 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513666 4904 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513675 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513682 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513690 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513697 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513706 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513725 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513734 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513742 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513750 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513758 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 
10:07:22.513766 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513774 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513782 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513790 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513804 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513812 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513821 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513829 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513836 4904 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513847 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513855 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513863 4904 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513872 4904 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 
10:07:22.513880 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513887 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513895 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513906 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513914 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513922 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513930 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513939 4904 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513946 4904 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513954 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513961 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513969 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513977 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on 
node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513986 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.513993 4904 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514001 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514008 4904 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514016 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514023 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514031 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514038 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514046 4904 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514053 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514061 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514068 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514076 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc 
kubenswrapper[4904]: I0223 10:07:22.514083 4904 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514091 4904 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514099 4904 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514107 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514114 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514122 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514130 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.514137 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.548132 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.556936 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.563279 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.563689 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.563702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.563746 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.563759 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.563950 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 10:07:22 crc kubenswrapper[4904]: W0223 10:07:22.566986 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a2821ca3cc6134e4fef34db16f7493b78af8472c80a826b3bdb1922fb189c80a WatchSource:0}: Error finding container a2821ca3cc6134e4fef34db16f7493b78af8472c80a826b3bdb1922fb189c80a: Status 404 returned error can't find the container with id a2821ca3cc6134e4fef34db16f7493b78af8472c80a826b3bdb1922fb189c80a Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.666035 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.666071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.666080 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.666094 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.666104 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.768231 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.768261 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.768282 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.768297 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.768308 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.790673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.790702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.790728 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.790742 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.790753 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.801023 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.807554 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.807590 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.807600 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.807615 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.807630 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.816909 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.819758 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.819791 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.819801 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.819810 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.819819 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.827960 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.830657 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.830685 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.830698 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.830709 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.830729 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.838636 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.842652 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.842695 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.842706 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.842737 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.842750 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.851550 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.851655 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.870982 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.871034 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.871045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.871064 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.871078 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.918803 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.918894 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.918955 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:07:23.918928992 +0000 UTC m=+77.339302495 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.919027 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: E0223 10:07:22.919087 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:23.919073166 +0000 UTC m=+77.339446679 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.973564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.973602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.973611 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.973623 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:22 crc kubenswrapper[4904]: I0223 10:07:22.973631 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:22Z","lastTransitionTime":"2026-02-23T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.019381 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.019424 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.019442 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019544 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019560 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019562 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:23 
crc kubenswrapper[4904]: E0223 10:07:23.019571 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019582 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019593 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019566 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019633 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:24.019606968 +0000 UTC m=+77.439980481 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019652 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:24.019639919 +0000 UTC m=+77.440013432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.019667 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:24.019660779 +0000 UTC m=+77.440034292 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.076366 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.076416 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.076426 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.076447 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.076459 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.180043 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.180096 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.180109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.180131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.180145 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.246279 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:56:00.395514424 +0000 UTC Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.260886 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.261493 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.262357 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.262989 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.263575 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.264092 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.265749 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.266268 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.267213 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.267790 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.268639 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.269415 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.270274 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 23 10:07:23 crc 
kubenswrapper[4904]: I0223 10:07:23.270829 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.271669 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.272193 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.272863 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.273602 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.274171 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.274766 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.276021 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.276910 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.278187 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.279417 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.280221 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.281858 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.282905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.283001 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.283036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.283062 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.283075 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.283106 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.283559 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.284177 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.285099 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.285541 4904 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.285642 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.287794 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.288268 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.288656 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.290275 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.291288 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.291854 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.292818 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.293473 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.294486 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.295122 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.296127 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.297150 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.297599 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.298761 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.299299 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.300344 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.300835 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.301309 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.302276 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.302879 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.303471 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.303972 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.386079 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.386122 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.386133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.386147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.386157 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.488703 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.488768 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.488778 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.488794 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.488803 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.549484 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a2821ca3cc6134e4fef34db16f7493b78af8472c80a826b3bdb1922fb189c80a"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.551263 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.551290 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6b315d119047713fce6d4f26fa1fbe17aea8804a6a3cbbb631ca8c439e4cdacd"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.553758 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.553822 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.553837 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3b164f3abc4c8e4cecfbe946d17a3305e660760cbcc7283604b779103ce34123"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.565226 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.578665 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.591369 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.591402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.591412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.591427 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.591437 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.594091 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.608465 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.620091 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.631767 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.645868 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.660627 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.675367 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.685522 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.695537 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.695569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.695579 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.695595 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.695604 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.696212 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.708027 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.798291 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.798325 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.798337 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.798352 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.798362 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
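[Editor's note: every "Failed to update status for pod" entry above fails for the same reason, spelled out at the end of each line: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired at 2025-08-24T17:21:41Z while the node clock reads 2026-02-23. As a triage aid, a minimal Python sketch (standard library only; the regex and helper name are ours, not kubelet's) that pulls both timestamps out of such a line and reports how stale the certificate is:

    import re
    from datetime import datetime

    # Matches the tail of the kubelet error: "current time <T1> is after <T2>"
    PAT = re.compile(r'current time (\S+) is after ([^"\s]+)')

    def cert_staleness(log_line):
        m = PAT.search(log_line)
        if m is None:
            return None
        now = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%SZ")
        not_after = datetime.strptime(m.group(2), "%Y-%m-%dT%H:%M:%SZ")
        return now - not_after  # positive means the cert is already expired

    line = ('x509: certificate has expired or is not yet valid: '
            'current time 2026-02-23T10:07:23Z is after 2025-08-24T17:21:41Z')
    print(cert_staleness(line))  # 182 days, 16:45:42 -- expired almost six months ago

Until that webhook certificate is rotated, every pod status patch in this log will keep bouncing, independent of the CNI problem reported below.]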
Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.798362 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.900940 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.900987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.900998 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.901014 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.901028 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:23Z","lastTransitionTime":"2026-02-23T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.924258 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:07:23 crc kubenswrapper[4904]: I0223 10:07:23.924362 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.924443 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:07:25.924417222 +0000 UTC m=+79.344790735 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.924514 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 10:07:23 crc kubenswrapper[4904]: E0223 10:07:23.924595 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:25.924576236 +0000 UTC m=+79.344949849 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.003915 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.003981 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.003992 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.004029 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.004041 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.025651 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.025722 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.025754 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.025883 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.025904 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.025903 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.025949 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.025983 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.025995 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.025918 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.025995 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:26.025976612 +0000 UTC m=+79.446350125 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.026100 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:26.026086525 +0000 UTC m=+79.446460038 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.026116 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:26.026108845 +0000 UTC m=+79.446482508 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.106371 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.106408 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.106422 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.106438 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.106447 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.208925 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.208965 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.208976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.208992 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.209003 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.246608 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:23:56.393120923 +0000 UTC
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.254898 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.254957 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.254898 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.255025 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.255156 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 10:07:24 crc kubenswrapper[4904]: E0223 10:07:24.255214 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.311198 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.311241 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.311256 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.311296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.311309 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.413667 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.413693 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.413739 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.413752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.413761 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.515387 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.515665 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.515786 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.515893 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.515978 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.618876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.619106 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.619164 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.619275 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.619354 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.721461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.721504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.721516 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.721534 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.721553 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.823977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.824010 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.824019 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.824033 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.824045 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.926332 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.926382 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.926398 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.926424 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
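[Editor's note: the NodeNotReady churn in this stretch repeats one condition verbatim roughly every 100ms: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration. An operator waiting for the network operator to write its config could run the equivalent one-off check sketched below; the helper name and the file patterns treated as CNI configs are our assumptions for illustration, not taken from the log:

    import glob
    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message

    def cni_config_present(d=CNI_DIR):
        # Assumed patterns: .conf, .conflist and .json files are what CNI
        # runtimes conventionally load from this directory.
        return sorted(p for ext in ("*.conf", "*.conflist", "*.json")
                      for p in glob.glob(os.path.join(d, ext)))

    print(cni_config_present() or "no CNI configuration file -- node stays NotReady")

The node flips Ready only once this directory gains a config, which is why the same condition keeps being re-recorded below.]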
Feb 23 10:07:24 crc kubenswrapper[4904]: I0223 10:07:24.926443 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:24Z","lastTransitionTime":"2026-02-23T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.029043 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.029069 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.029078 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.029093 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.029101 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.131740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.132044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.132054 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.132068 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.132078 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.233993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.234026 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.234034 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.234049 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.234058 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.247721 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:22:03.501880961 +0000 UTC
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.336310 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.336343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.336352 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.336367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.336377 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.438601 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.438637 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.438649 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.438665 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
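[Editor's note: the certificate_manager lines at 10:07:24 and 10:07:25 above (and a third at 10:07:26 below) report the same kubelet-serving expiration, 2026-02-24 05:53:03 UTC, yet three different rotation deadlines: 2026-01-15, 2025-11-08 and 2025-12-03. The deadline is re-drawn at random from late in the certificate's validity window each time it is evaluated, which spreads rotation load across a fleet; all three draws here are already in the past, so rotation is due immediately. A sketch of such a jittered draw; the 70-100% window and the one-year lifetime are our assumptions for illustration (they happen to match the spread seen in these three lines), not values read from the log:

    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before, not_after):
        # Assumed jitter: a uniform point in the last ~30% of the validity window.
        total = (not_after - not_before).total_seconds()
        return not_before + timedelta(seconds=total * (0.7 + 0.3 * random.random()))

    nb = datetime(2025, 2, 24, 5, 53, 3)   # assumed issue time (one-year cert)
    na = datetime(2026, 2, 24, 5, 53, 3)   # expiration from the log
    print(rotation_deadline(nb, na))       # lands between ~2025-11-07 and 2026-02-24
]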
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.438676 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.541288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.541348 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.541375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.541406 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.541429 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.559617 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f"}
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.579411 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:25Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.596055 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:25Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.614684 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:25Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:25Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.643771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.643818 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.643829 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.643847 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.643861 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.650637 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:25Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.667196 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:25Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.746689 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.746756 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.746766 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.746781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.746792 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.850514 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.850556 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.850568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.850585 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.850599 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.941478 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.941626 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:25 crc kubenswrapper[4904]: E0223 10:07:25.941685 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:07:29.941657884 +0000 UTC m=+83.362031397 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:07:25 crc kubenswrapper[4904]: E0223 10:07:25.941804 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:25 crc kubenswrapper[4904]: E0223 10:07:25.941920 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:29.94188811 +0000 UTC m=+83.362261693 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.952987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.953013 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.953041 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.953055 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:25 crc kubenswrapper[4904]: I0223 10:07:25.953065 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:25Z","lastTransitionTime":"2026-02-23T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.042358 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.042419 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.042444 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042576 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042597 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042610 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042632 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042665 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:30.042650278 +0000 UTC m=+83.463023811 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042671 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042691 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042788 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:30.042767561 +0000 UTC m=+83.463141114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042642 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.042881 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:30.042863264 +0000 UTC m=+83.463236777 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.054810 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.054857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.054879 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.054909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.054925 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.157079 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.157109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.157118 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.157131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.157141 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.248638 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:54:24.358274922 +0000 UTC Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.254809 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.254809 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.255012 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.255134 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.254878 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:26 crc kubenswrapper[4904]: E0223 10:07:26.255385 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.259384 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.259420 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.259432 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.259446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.259458 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.361520 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.361553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.361564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.361577 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.361603 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.463834 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.463906 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.463931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.463973 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.464002 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.566550 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.566608 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.566620 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.566653 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.566666 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.669419 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.669491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.669504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.669541 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.669554 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.772410 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.772485 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.772504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.772527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.772545 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.876461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.876506 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.876518 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.876536 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.876549 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.980434 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.980500 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.980516 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.980540 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:26 crc kubenswrapper[4904]: I0223 10:07:26.980554 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:26Z","lastTransitionTime":"2026-02-23T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.027577 4904 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.083544 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.083594 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.083605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.083624 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.083638 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.186935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.187291 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.187388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.187642 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.187841 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.249013 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:32:50.678163928 +0000 UTC Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.271785 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.271785 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:27Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.284243 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.289996 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.290028 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.290038 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.290053 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.290064 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.297369 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.310280 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.322484 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.333644 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.391732 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.391770 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.391779 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.391796 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.391806 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.494911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.495132 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.495249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.495372 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.495555 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.601178 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.601471 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.601572 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.601683 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.601802 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.703830 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.703874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.703883 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.703902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.703914 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.806229 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.806267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.806275 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.806289 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.806298 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.908849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.909110 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.909220 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.909323 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:27 crc kubenswrapper[4904]: I0223 10:07:27.909414 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:27Z","lastTransitionTime":"2026-02-23T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.011945 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.012190 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.012263 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.012326 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.012393 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.114168 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.114202 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.114213 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.114227 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.114238 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.216538 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.216566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.216575 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.216587 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.216596 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.249734 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:18:55.415142658 +0000 UTC Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.255024 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:28 crc kubenswrapper[4904]: E0223 10:07:28.255229 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.255078 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:28 crc kubenswrapper[4904]: E0223 10:07:28.255478 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.255059 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:28 crc kubenswrapper[4904]: E0223 10:07:28.255683 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.319181 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.319229 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.319251 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.319281 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.319299 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.421857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.421909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.421920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.421935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.421945 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.523638 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.523677 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.523686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.523701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.523725 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.626236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.626269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.626276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.626291 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.626301 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.727937 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.727974 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.727983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.727997 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.728007 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.830525 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.830563 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.830576 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.830592 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.830603 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.933359 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.933406 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.933423 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.933443 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:28 crc kubenswrapper[4904]: I0223 10:07:28.933458 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:28Z","lastTransitionTime":"2026-02-23T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.036051 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.036100 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.036115 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.036136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.036152 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.139097 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.139140 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.139151 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.139169 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.139180 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.241919 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.241955 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.241967 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.241981 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.241992 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.250606 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:45:51.16711816 +0000 UTC Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.253532 4904 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.344446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.344483 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.344494 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.344512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.344528 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.447036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.447075 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.447089 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.447104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.447116 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.549645 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.549679 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.549687 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.549700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.549724 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.651608 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.651646 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.651657 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.651673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.651684 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.753373 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.753408 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.753436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.753450 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.753458 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.855464 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.855493 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.855501 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.855513 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.855521 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.958500 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.958568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.958590 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.958619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.958640 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:29Z","lastTransitionTime":"2026-02-23T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.979022 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:29 crc kubenswrapper[4904]: E0223 10:07:29.979253 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:07:37.979210016 +0000 UTC m=+91.399583569 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:07:29 crc kubenswrapper[4904]: I0223 10:07:29.979373 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:29 crc kubenswrapper[4904]: E0223 10:07:29.979554 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:29 crc kubenswrapper[4904]: E0223 10:07:29.979649 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:37.979627027 +0000 UTC m=+91.400000590 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.061128 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.061174 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.061187 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.061205 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.061219 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.080706 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.080803 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.080831 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.080939 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.080953 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.080963 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.081012 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:38.080999542 +0000 UTC m=+91.501373055 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.081040 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.081091 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.081111 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.081047 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.081188 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:38.081164577 +0000 UTC m=+91.501538120 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.081308 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:38.08128059 +0000 UTC m=+91.501654143 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.163771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.163807 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.163816 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.163830 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.163841 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.250933 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:20:03.070518638 +0000 UTC Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.254223 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.254277 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.254391 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.254469 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.254672 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.254797 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.266551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.266587 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.266596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.266610 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.266619 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.266773 4904 scope.go:117] "RemoveContainer" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f" Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.267076 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.268609 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.369617 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.369685 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.369702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.369792 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.369811 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.472971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.473266 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.473514 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.473733 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.473854 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.572810 4904 scope.go:117] "RemoveContainer" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f" Feb 23 10:07:30 crc kubenswrapper[4904]: E0223 10:07:30.573325 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.576082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.576126 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.576136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.576153 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.576166 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.678217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.678256 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.678266 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.678281 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.678293 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.781450 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.781497 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.781512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.781542 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.781561 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.883876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.883920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.883931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.883948 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.883959 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.986966 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.987025 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.987044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.987068 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:30 crc kubenswrapper[4904]: I0223 10:07:30.987088 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:30Z","lastTransitionTime":"2026-02-23T10:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.094130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.094225 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.094245 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.094606 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.094650 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.197354 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.197386 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.197396 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.197410 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.197420 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.252117 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:31:57.263417551 +0000 UTC Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.299599 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.299673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.299690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.299760 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.299788 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.402262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.402300 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.402309 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.402324 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.402333 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.504911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.504943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.504952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.504964 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.504973 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.607657 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.607701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.607735 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.607758 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.607769 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.710105 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.710138 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.710149 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.710165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.710178 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.812474 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.812530 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.812554 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.812572 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.812585 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.915444 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.915492 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.915507 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.915528 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:31 crc kubenswrapper[4904]: I0223 10:07:31.915544 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:31Z","lastTransitionTime":"2026-02-23T10:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.018525 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.018584 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.018605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.018634 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.018658 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.121052 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.121082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.121108 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.121120 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.121128 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.223364 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.223392 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.223400 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.223412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.223422 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.253110 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:16:25.246792364 +0000 UTC Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.254666 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:32 crc kubenswrapper[4904]: E0223 10:07:32.254792 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.254839 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.254870 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:32 crc kubenswrapper[4904]: E0223 10:07:32.254898 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:32 crc kubenswrapper[4904]: E0223 10:07:32.254943 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.325535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.325573 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.325585 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.325600 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.325610 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.428106 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.428157 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.428175 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.428208 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.428223 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.530236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.530273 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.530288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.530312 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.530322 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.632703 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.632936 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.633036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.633113 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.633172 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.735062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.735117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.735127 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.735142 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.735152 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.837620 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.837856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.837931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.837998 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.838053 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.940532 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.940602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.940621 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.940645 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.940662 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.942055 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.942096 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.942109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.942125 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.942136 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: E0223 10:07:32.953892 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:32Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.956799 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.956851 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.956862 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.956881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.956892 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: E0223 10:07:32.967619 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:32Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.971590 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.971630 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.971642 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.971660 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.971672 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: E0223 10:07:32.982559 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:32Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.985904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.985939 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.985949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.985967 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:32 crc kubenswrapper[4904]: I0223 10:07:32.985979 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:32Z","lastTransitionTime":"2026-02-23T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:32 crc kubenswrapper[4904]: E0223 10:07:32.996632 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:32Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.001462 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.001508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.001518 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.001538 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.001550 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:33Z","lastTransitionTime":"2026-02-23T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:33 crc kubenswrapper[4904]: E0223 10:07:33.014216 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:33 crc kubenswrapper[4904]: E0223 10:07:33.014409 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.043589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.043638 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.043651 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.043673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.043685 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:33Z","lastTransitionTime":"2026-02-23T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.145768 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.145941 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.145963 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.145989 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.146005 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:33Z","lastTransitionTime":"2026-02-23T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.248988 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.249037 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.249049 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.249067 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.249079 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:33Z","lastTransitionTime":"2026-02-23T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.254335 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:38:34.040211156 +0000 UTC Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.351343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.351380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.351388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.351402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.351414 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:33Z","lastTransitionTime":"2026-02-23T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.453154 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.453195 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.453206 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.453218 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.453228 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:33Z","lastTransitionTime":"2026-02-23T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.555775 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.555831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.555843 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.555857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:33 crc kubenswrapper[4904]: I0223 10:07:33.555868 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:33Z","lastTransitionTime":"2026-02-23T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.367226 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:57:38.54054571 +0000 UTC Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.368167 4904 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.368207 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:35 crc kubenswrapper[4904]: E0223 10:07:35.368353 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.368370 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:35 crc kubenswrapper[4904]: E0223 10:07:35.368507 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.370032 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:35 crc kubenswrapper[4904]: E0223 10:07:35.370123 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.371391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.371436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.371447 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.371463 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.371474 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:35Z","lastTransitionTime":"2026-02-23T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.376400 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.474800 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.474849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.474858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.474871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.474880 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:35Z","lastTransitionTime":"2026-02-23T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.577783 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.577832 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.577840 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.577853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.577862 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:35Z","lastTransitionTime":"2026-02-23T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.680265 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.680294 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.680304 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.680320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.680330 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:35Z","lastTransitionTime":"2026-02-23T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.782407 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.782431 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.782439 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.782450 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.782459 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:35Z","lastTransitionTime":"2026-02-23T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.884493 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.884521 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.884530 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.884542 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.884551 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:35Z","lastTransitionTime":"2026-02-23T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.987376 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.987831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.987846 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.987861 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:35 crc kubenswrapper[4904]: I0223 10:07:35.987871 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:35Z","lastTransitionTime":"2026-02-23T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.090628 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.090670 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.090678 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.090691 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.090700 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.193036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.193075 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.193084 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.193098 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.193107 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.296331 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.296389 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.296409 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.296435 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.296456 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.367809 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:03:02.410647538 +0000 UTC Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.398990 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.399035 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.399046 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.399062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.399076 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.500828 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.500862 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.500872 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.500917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.500932 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.603937 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.603984 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.604004 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.604027 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.604043 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.705764 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.705798 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.705807 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.705820 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.705831 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.807799 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.807859 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.807870 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.807884 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.807892 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.909610 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.909650 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.909658 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.909673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:36 crc kubenswrapper[4904]: I0223 10:07:36.909682 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:36Z","lastTransitionTime":"2026-02-23T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.012007 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.012063 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.012074 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.012091 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.012103 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.114332 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.114573 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.114730 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.114825 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.114927 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.217728 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.217772 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.217781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.217796 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.217805 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.255289 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:37 crc kubenswrapper[4904]: E0223 10:07:37.255432 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.255315 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:37 crc kubenswrapper[4904]: E0223 10:07:37.255627 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.255776 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:37 crc kubenswrapper[4904]: E0223 10:07:37.255925 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.271033 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.284128 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:37Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.301093 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:37Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.316179 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:37Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.319693 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.319800 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.319813 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.319829 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.319842 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.331962 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:37Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.347212 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:37Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.363593 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:37Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.368418 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:10:16.047199893 +0000 UTC
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.379446 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:37Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.422315 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.422361 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.422380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.422403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.422420 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.524431 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.524483 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.524494 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.524510 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.524523 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.627517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.627595 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.627615 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.627644 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.627664 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.731368 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.731439 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.731456 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.731482 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.731503 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.835267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.835351 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.835375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.835405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.835428 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.938127 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.938177 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.938189 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.938204 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.938215 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:37Z","lastTransitionTime":"2026-02-23T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.986434 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:07:37 crc kubenswrapper[4904]: I0223 10:07:37.986491 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:07:37 crc kubenswrapper[4904]: E0223 10:07:37.986600 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 10:07:37 crc kubenswrapper[4904]: E0223 10:07:37.986642 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:53.9866298 +0000 UTC m=+107.407003313 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 10:07:37 crc kubenswrapper[4904]: E0223 10:07:37.986978 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:07:53.986948978 +0000 UTC m=+107.407322531 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.040545 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.040610 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.040623 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.040637 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.040645 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.087849 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.087933 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.087973 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088135 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088166 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088189 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088264 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:54.088240811 +0000 UTC m=+107.508614364 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088797 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088834 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:54.088824897 +0000 UTC m=+107.509198410 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088941 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088979 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.088990 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 10:07:38 crc kubenswrapper[4904]: E0223 10:07:38.089044 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:07:54.089028853 +0000 UTC m=+107.509402366 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.143021 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.143087 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.143104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.143122 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.143135 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.246022 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.246059 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.246068 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.246082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.246092 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.349161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.349226 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.349250 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.349278 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.349303 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.368920 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:41:58.388486149 +0000 UTC
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.451732 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.451773 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.451789 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.451805 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.451815 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.554459 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.554526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.554545 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.554570 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.554589 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.657457 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.657531 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.657550 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.657579 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.657598 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.761597 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.761640 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.761648 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.761663 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.761673 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.864795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.864833 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.864844 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.864859 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.864869 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.967487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.967529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.967548 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.967567 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:38 crc kubenswrapper[4904]: I0223 10:07:38.967579 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:38Z","lastTransitionTime":"2026-02-23T10:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.071157 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.071191 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.071203 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.071226 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.071238 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.173763 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.173805 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.173815 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.173829 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.173839 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.255996 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.256051 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.256088 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 10:07:39 crc kubenswrapper[4904]: E0223 10:07:39.256238 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 10:07:39 crc kubenswrapper[4904]: E0223 10:07:39.256333 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 10:07:39 crc kubenswrapper[4904]: E0223 10:07:39.256428 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.276451 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.276500 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.276519 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.276540 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.276555 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.369601 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:28:41.239514428 +0000 UTC
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.378294 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.378337 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.378348 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.378380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.378391 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.481075 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.481117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.481128 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.481145 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.481157 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.583270 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.583321 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.583336 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.583382 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.583398 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.685302 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.685344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.685354 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.685367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.685378 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.787195 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.787474 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.787557 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.787657 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.787770 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.890314 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.890357 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.890367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.890382 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.890392 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.992229 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.992494 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.992629 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.992754 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:39 crc kubenswrapper[4904]: I0223 10:07:39.992858 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:39Z","lastTransitionTime":"2026-02-23T10:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.095470 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.095878 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.095984 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.096134 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.096280 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.199042 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.199113 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.199131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.199162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.199181 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.302185 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.302262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.302282 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.302311 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.302332 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.369999 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 15:02:23.679477221 +0000 UTC
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.405806 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.406197 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.406467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.406639 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.406803 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.510397 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.510799 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.510960 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.511113 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.511237 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.614178 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.614231 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.614241 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.614286 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.614297 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.716529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.716591 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.716602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.716622 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.716635 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.819342 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.819393 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.819403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.819420 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.819429 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.921413 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.921466 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.921479 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.921499 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:40 crc kubenswrapper[4904]: I0223 10:07:40.921514 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:40Z","lastTransitionTime":"2026-02-23T10:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.023100 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.023136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.023147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.023165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.023176 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.126375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.126471 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.126497 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.126535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.126632 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.228999 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.229044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.229054 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.229073 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.229082 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.254409 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.254485 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.254518 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:41 crc kubenswrapper[4904]: E0223 10:07:41.254639 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:41 crc kubenswrapper[4904]: E0223 10:07:41.254742 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:41 crc kubenswrapper[4904]: E0223 10:07:41.254815 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.330957 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.331024 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.331058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.331077 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.331091 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.370483 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:14:04.224524545 +0000 UTC Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.434992 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.435529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.436019 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.436472 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.436889 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.540978 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.541439 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.541583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.541753 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.541921 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.646091 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.646459 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.646568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.646773 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.646922 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.751059 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.751193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.751217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.751253 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.751280 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.854858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.854921 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.854938 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.854963 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.854978 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.958064 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.958117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.958133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.958158 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:41 crc kubenswrapper[4904]: I0223 10:07:41.958175 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:41Z","lastTransitionTime":"2026-02-23T10:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.060501 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.060591 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.060600 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.060614 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.060623 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.163986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.164042 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.164054 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.164076 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.164090 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.267923 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.267994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.268002 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.268016 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.268026 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.370546 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.370596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.370608 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.370629 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.370643 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.370673 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:58:33.272982456 +0000 UTC Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.475130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.475246 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.475277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.475314 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.475335 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.578100 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.578155 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.578165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.578184 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.578196 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.680555 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.680609 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.680623 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.680643 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.680658 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.783641 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.783689 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.783699 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.783730 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.783743 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.886959 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.887009 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.887041 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.887062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.887078 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.989365 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.989449 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.989463 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.989479 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:42 crc kubenswrapper[4904]: I0223 10:07:42.989513 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:42Z","lastTransitionTime":"2026-02-23T10:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.092597 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.092688 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.092709 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.092773 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.092799 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.094867 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.094994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.095007 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.095027 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.095062 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.106546 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.111522 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.111555 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.111569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.111585 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.111597 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.126963 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.131470 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.131498 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.131506 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.131518 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.131528 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.142237 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.145229 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.145249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.145257 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.145268 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.145276 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.156544 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.160537 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.160560 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.160568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.160578 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.160586 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.173638 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.173787 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.198873 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
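Every retry above fails identically, and the tail of each error names the blocker: the kubelet's status PATCH for node "crc" passes through the validating webhook "node.network-node-identity.openshift.io" at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-02-23T10:07:43Z. After a fixed number of attempts (nodeStatusUpdateRetry, historically 5 in the upstream kubelet), it gives up with the "update node status exceeds retry count" entry above. A minimal Go sketch for confirming the expired certificate from the node; the address comes from the log, and InsecureSkipVerify is deliberate because the goal is to read a certificate that no longer verifies:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log. Verification is skipped
	// on purpose: we want to inspect the certificate that fails to verify.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject=%v notBefore=%s notAfter=%s now=%s\n",
		cert.Subject,
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		// Matches the log tail: "x509: certificate has expired or is not yet
		// valid: current time 2026-02-23T10:07:43Z is after 2025-08-24T17:21:41Z".
		fmt.Println("certificate has expired")
	}
}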
event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.199277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.199356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.199383 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.199402 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.254738 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.254826 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.254951 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.255112 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.255208 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:43 crc kubenswrapper[4904]: E0223 10:07:43.255443 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.301917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.301949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.301957 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.301972 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.301981 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.371393 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:08:42.329206324 +0000 UTC Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.404685 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.404787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.404804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.404825 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.404838 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.507976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.508013 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.508022 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.508035 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.508044 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.611347 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.611449 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.611469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.611504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.611525 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.714101 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.714147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.714158 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.714193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.714204 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.817788 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.817844 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.817853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.817871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.817880 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.920815 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.920896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.920909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.920951 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:43 crc kubenswrapper[4904]: I0223 10:07:43.920966 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:43Z","lastTransitionTime":"2026-02-23T10:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.024809 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.024899 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.024917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.024977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.025015 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.128042 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.128170 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.128204 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.128242 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.128267 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.232154 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.232217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.232237 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.232266 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.232288 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.256197 4904 scope.go:117] "RemoveContainer" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.335296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.335348 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.335359 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.335375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.335386 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.371935 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:20:35.09462198 +0000 UTC Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.437446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.437473 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.437481 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.437495 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.437503 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.539344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.539369 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.539377 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.539390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.539398 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.641645 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.641881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.641945 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.642029 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.642101 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.744437 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.744692 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.744787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.744858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.744930 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.846844 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.847079 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.847175 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.847260 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.847348 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.950285 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.951311 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.951491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.951638 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:44 crc kubenswrapper[4904]: I0223 10:07:44.951818 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:44Z","lastTransitionTime":"2026-02-23T10:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.054821 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.055290 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.055509 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.055781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.055916 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.158251 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.158885 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.159391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.159611 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.159684 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.254595 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.254657 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.254621 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:45 crc kubenswrapper[4904]: E0223 10:07:45.254803 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:45 crc kubenswrapper[4904]: E0223 10:07:45.254970 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:45 crc kubenswrapper[4904]: E0223 10:07:45.255046 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.262081 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.262282 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.262405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.262550 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.262697 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.364810 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.364843 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.364853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.364867 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.364878 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.372237 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:13:46.809453762 +0000 UTC Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.401516 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.403905 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.404214 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.422245 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:45Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.437149 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f98
8293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:45Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.451363 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:45Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.468219 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.468220 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:45Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.468279 4904 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.468406 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.468427 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.468439 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.491694 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:45Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.511327 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:45Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.525597 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:45Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.538209 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:45Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.571550 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.571582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.571590 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.571603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.571612 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.674006 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.674335 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.674428 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.674531 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.674623 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.776844 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.776972 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.777007 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.777038 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.777055 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.880134 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.880192 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.880207 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.880230 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.880247 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.982700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.982797 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.982813 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.982833 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:45 crc kubenswrapper[4904]: I0223 10:07:45.982848 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:45Z","lastTransitionTime":"2026-02-23T10:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.085554 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.085594 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.085603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.085619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.085628 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.188874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.188931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.188943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.188962 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.188975 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.291939 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.291996 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.292016 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.292043 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.292063 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.373173 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:53:38.06362089 +0000 UTC Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.393642 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.393695 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.393736 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.393761 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.393778 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.495996 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.496036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.496050 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.496071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.496082 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.598104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.598147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.598158 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.598175 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.598186 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.692224 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-b25k6"] Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.692632 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-b25k6" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.695587 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.695781 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.695975 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.700877 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.700919 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.700930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.700946 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.700958 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.705346 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.716388 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.729366 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.742907 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.755044 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.764841 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.766118 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6d93217b-ff9a-4324-8f62-8a0c0034367a-hosts-file\") pod \"node-resolver-b25k6\" (UID: \"6d93217b-ff9a-4324-8f62-8a0c0034367a\") " pod="openshift-dns/node-resolver-b25k6" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.766187 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnzg7\" (UniqueName: \"kubernetes.io/projected/6d93217b-ff9a-4324-8f62-8a0c0034367a-kube-api-access-hnzg7\") pod \"node-resolver-b25k6\" (UID: \"6d93217b-ff9a-4324-8f62-8a0c0034367a\") " pod="openshift-dns/node-resolver-b25k6" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.777936 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.788614 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.799768 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:46Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.803360 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.803402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.803413 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.803428 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.803439 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.867020 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnzg7\" (UniqueName: \"kubernetes.io/projected/6d93217b-ff9a-4324-8f62-8a0c0034367a-kube-api-access-hnzg7\") pod \"node-resolver-b25k6\" (UID: \"6d93217b-ff9a-4324-8f62-8a0c0034367a\") " pod="openshift-dns/node-resolver-b25k6" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.867114 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6d93217b-ff9a-4324-8f62-8a0c0034367a-hosts-file\") pod \"node-resolver-b25k6\" (UID: \"6d93217b-ff9a-4324-8f62-8a0c0034367a\") " pod="openshift-dns/node-resolver-b25k6" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.867181 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6d93217b-ff9a-4324-8f62-8a0c0034367a-hosts-file\") pod \"node-resolver-b25k6\" (UID: \"6d93217b-ff9a-4324-8f62-8a0c0034367a\") " pod="openshift-dns/node-resolver-b25k6" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.891652 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnzg7\" (UniqueName: \"kubernetes.io/projected/6d93217b-ff9a-4324-8f62-8a0c0034367a-kube-api-access-hnzg7\") pod \"node-resolver-b25k6\" (UID: \"6d93217b-ff9a-4324-8f62-8a0c0034367a\") " pod="openshift-dns/node-resolver-b25k6" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.905710 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.905762 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.905771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.905784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:46 crc kubenswrapper[4904]: I0223 10:07:46.905794 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:46Z","lastTransitionTime":"2026-02-23T10:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.007053 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-b25k6" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.007992 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.008034 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.008046 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.008095 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.008117 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.060230 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-h4l4k"] Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.060753 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fm2n2"] Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.060866 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mv4jm"] Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.061323 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.061591 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.061984 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.064104 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.065020 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.065252 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.065418 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.065549 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.066848 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.067044 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.067211 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.067367 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.067811 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.067872 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.068109 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.082497 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.093267 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.104685 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.112276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.112342 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.112354 4904 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.112373 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.112384 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.115251 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.127786 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.140038 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.150250 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.160365 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169157 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-system-cni-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169196 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-os-release\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169214 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-kubelet\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-hostroot\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169267 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-daemon-config\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169535 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f16e2e7-7479-4c7d-9582-00896887abcc-cni-binary-copy\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169569 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-system-cni-dir\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169583 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-os-release\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169598 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-etc-kubernetes\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169615 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-rootfs\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169629 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-cnibin\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169643 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169658 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-mcd-auth-proxy-config\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169672 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-cnibin\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169722 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-proxy-tls\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169755 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-socket-dir-parent\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169770 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-conf-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169787 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-k8s-cni-cncf-io\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169802 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-cni-bin\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169816 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gcn8\" (UniqueName: \"kubernetes.io/projected/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-kube-api-access-4gcn8\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-cni-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 
10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169846 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-cni-binary-copy\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169861 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pk55\" (UniqueName: \"kubernetes.io/projected/6f16e2e7-7479-4c7d-9582-00896887abcc-kube-api-access-8pk55\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169877 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-cni-multus\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169891 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f16e2e7-7479-4c7d-9582-00896887abcc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169906 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-netns\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169920 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9kpf\" (UniqueName: \"kubernetes.io/projected/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-kube-api-access-d9kpf\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169943 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-multus-certs\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.169949 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.181070 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.191996 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.206882 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.216240 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.216277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.216287 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.216299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.216309 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.218010 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.227919 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.238139 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.251723 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.254320 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:47 crc kubenswrapper[4904]: E0223 10:07:47.254409 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.254491 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.254497 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:47 crc kubenswrapper[4904]: E0223 10:07:47.254638 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:47 crc kubenswrapper[4904]: E0223 10:07:47.254755 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.264569 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271105 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-mcd-auth-proxy-config\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271138 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-cnibin\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271159 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-proxy-tls\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271174 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-socket-dir-parent\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271191 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-conf-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271498 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-k8s-cni-cncf-io\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271522 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-cni-bin\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271542 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gcn8\" (UniqueName: \"kubernetes.io/projected/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-kube-api-access-4gcn8\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271557 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-cni-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-cni-binary-copy\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271587 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pk55\" (UniqueName: \"kubernetes.io/projected/6f16e2e7-7479-4c7d-9582-00896887abcc-kube-api-access-8pk55\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271603 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-cni-multus\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271617 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f16e2e7-7479-4c7d-9582-00896887abcc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271631 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-netns\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271644 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9kpf\" (UniqueName: \"kubernetes.io/projected/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-kube-api-access-d9kpf\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271666 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-multus-certs\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271689 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-system-cni-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271703 4904 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-os-release\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271748 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-kubelet\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271769 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-hostroot\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271796 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-daemon-config\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271810 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f16e2e7-7479-4c7d-9582-00896887abcc-cni-binary-copy\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271825 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-system-cni-dir\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271843 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-os-release\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271858 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-etc-kubernetes\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271871 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-rootfs\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271884 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-cnibin\") pod 
\"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.271897 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272227 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-netns\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272247 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-system-cni-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272271 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-kubelet\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272310 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-multus-certs\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272341 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-hostroot\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272372 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-os-release\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272478 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-os-release\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272511 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-cni-bin\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272534 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-cnibin\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272533 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-system-cni-dir\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272567 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-rootfs\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272610 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-etc-kubernetes\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272640 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-conf-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.272697 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-socket-dir-parent\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.273048 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-daemon-config\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.273062 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f16e2e7-7479-4c7d-9582-00896887abcc-cni-binary-copy\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.273098 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-cnibin\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.273114 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-var-lib-cni-multus\") pod \"multus-fm2n2\" (UID: 
\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.273145 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-host-run-k8s-cni-cncf-io\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.273303 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-multus-cni-dir\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.273723 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f16e2e7-7479-4c7d-9582-00896887abcc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.273744 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-mcd-auth-proxy-config\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.274034 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f16e2e7-7479-4c7d-9582-00896887abcc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.275186 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-cni-binary-copy\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.278072 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.278190 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-proxy-tls\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.290691 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.290934 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9kpf\" (UniqueName: \"kubernetes.io/projected/91cb76d8-4bf9-49e5-b51a-c55794ba0cec-kube-api-access-d9kpf\") pod \"machine-config-daemon-h4l4k\" (UID: \"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\") " pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.292833 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pk55\" (UniqueName: \"kubernetes.io/projected/6f16e2e7-7479-4c7d-9582-00896887abcc-kube-api-access-8pk55\") pod \"multus-additional-cni-plugins-mv4jm\" (UID: \"6f16e2e7-7479-4c7d-9582-00896887abcc\") " pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.293399 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gcn8\" (UniqueName: \"kubernetes.io/projected/65ad73a3-cf4b-49ec-b994-2d52cb43bc76-kube-api-access-4gcn8\") pod \"multus-fm2n2\" (UID: \"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\") " pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.302023 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.311923 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.317788 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.317813 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.317821 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.317834 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.317843 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.324906 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.333567 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.342179 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.352043 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.373482 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:25:51.061319199 +0000 UTC Feb 23 10:07:47 crc 
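Every "Failed to update status for pod" entry in this window shares one root cause: the kubelet's status patch has to pass through the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-23T10:07:47Z. The kube-apiserver-check-endpoints container hits the same class of failure against https://localhost:6443 (that certificate expired 2026-02-23T05:33:16Z) and exits, which is why its restartCount is already 4; the certificate_manager entry just above also shows the kubelet's own serving certificate past its rotation deadline (2025-12-17) on this clock. Below is a minimal Go sketch, not part of the log, of the validity check that produces "certificate has expired or is not yet valid" — the address is taken from the log, everything else is illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the log lines above; InsecureSkipVerify lets us
	// retrieve the peer certificate even though normal verification fails.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Println("NotBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("NotAfter: ", cert.NotAfter.Format(time.RFC3339))
	fmt.Println("Now:      ", now.Format(time.RFC3339))

	// The verifier's window test: any time outside [NotBefore, NotAfter]
	// yields "certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate is outside its validity window")
	}
}

The window test alone is enough to reject the handshake, which is why the webhook call never gets far enough to matter what the patch body contains.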
kubenswrapper[4904]: I0223 10:07:47.380352 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.384491 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" Feb 23 10:07:47 crc kubenswrapper[4904]: W0223 10:07:47.394193 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f16e2e7_7479_4c7d_9582_00896887abcc.slice/crio-3b2ea1db2906b55fc51069354012b65772abfb98c58ac20e75a54cd5d5a70dad WatchSource:0}: Error finding container 3b2ea1db2906b55fc51069354012b65772abfb98c58ac20e75a54cd5d5a70dad: Status 404 returned error can't find the container with id 3b2ea1db2906b55fc51069354012b65772abfb98c58ac20e75a54cd5d5a70dad Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.397125 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.402526 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.406857 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-fm2n2" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.419659 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.427639 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9h7jb"] Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.428504 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.429903 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.430778 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" event={"ID":"6f16e2e7-7479-4c7d-9582-00896887abcc","Type":"ContainerStarted","Data":"3b2ea1db2906b55fc51069354012b65772abfb98c58ac20e75a54cd5d5a70dad"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.430806 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.430846 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.430858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.430872 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.430885 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.431942 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.432068 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.432223 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.432282 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.432548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-b25k6" event={"ID":"6d93217b-ff9a-4324-8f62-8a0c0034367a","Type":"ContainerStarted","Data":"11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.432613 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-b25k6" event={"ID":"6d93217b-ff9a-4324-8f62-8a0c0034367a","Type":"ContainerStarted","Data":"507c36390056be0663b667033719b720cce19a946165e927d5262b54d2717947"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.433314 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.433373 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.435655 4904 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\
":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: W0223 10:07:47.447307 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65ad73a3_cf4b_49ec_b994_2d52cb43bc76.slice/crio-6b7a865d2d0bbeaf2407152b5dd66d6fc9c4b0015383c8cbb1e9b7a5852585f8 WatchSource:0}: Error finding container 6b7a865d2d0bbeaf2407152b5dd66d6fc9c4b0015383c8cbb1e9b7a5852585f8: Status 404 returned error can't find the container with id 6b7a865d2d0bbeaf2407152b5dd66d6fc9c4b0015383c8cbb1e9b7a5852585f8 Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.449670 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.459250 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.470809 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473170 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-ovn\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473201 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-netd\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473237 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-systemd\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473261 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473281 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-env-overrides\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473297 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-log-socket\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473313 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0acf61bd-42c5-4566-ac29-815afead2012-ovn-node-metrics-cert\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473340 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-systemd-units\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473355 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-node-log\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473440 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-bin\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473478 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-kubelet\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc 
kubenswrapper[4904]: I0223 10:07:47.473491 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-netns\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473506 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-var-lib-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473580 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-slash\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473615 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-ovn-kubernetes\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473637 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473662 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-config\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473677 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-script-lib\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473693 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdnv\" (UniqueName: \"kubernetes.io/projected/0acf61bd-42c5-4566-ac29-815afead2012-kube-api-access-trdnv\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.473709 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-etc-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.481684 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.494564 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.503901 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is 
after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.515127 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 
10:07:47.527107 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.534394 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.534428 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.534435 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.534448 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.534457 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.539145 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.548292 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.560115 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.572237 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574530 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-systemd\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574565 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574583 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-env-overrides\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574600 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-log-socket\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574616 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0acf61bd-42c5-4566-ac29-815afead2012-ovn-node-metrics-cert\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574632 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-systemd-units\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574645 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-node-log\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574662 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-bin\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574681 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-kubelet\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574729 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-netns\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574764 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-var-lib-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574788 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-slash\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574807 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-ovn-kubernetes\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574829 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574855 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-config\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574869 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-script-lib\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574884 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trdnv\" (UniqueName: \"kubernetes.io/projected/0acf61bd-42c5-4566-ac29-815afead2012-kube-api-access-trdnv\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574906 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-etc-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574919 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-ovn\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574932 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-netd\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574965 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-kubelet\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.574984 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-netd\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575015 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-systemd\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575018 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575040 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-netns\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575064 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-var-lib-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575087 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-slash\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575107 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-ovn-kubernetes\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575129 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575822 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-env-overrides\") pod \"ovnkube-node-9h7jb\" 
(UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575867 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-log-socket\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.575994 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-config\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.576218 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-node-log\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.576253 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-systemd-units\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.576264 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-etc-openvswitch\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.576279 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-ovn\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.576309 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-bin\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.576468 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-script-lib\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.579179 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0acf61bd-42c5-4566-ac29-815afead2012-ovn-node-metrics-cert\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.584093 4904 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.591108 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trdnv\" (UniqueName: \"kubernetes.io/projected/0acf61bd-42c5-4566-ac29-815afead2012-kube-api-access-trdnv\") pod \"ovnkube-node-9h7jb\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.594310 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.606285 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.616815 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.635438 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.636036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.636064 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.636072 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.636086 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.636094 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.646398 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contain
erID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.662414 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.681117 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:47Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.738415 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.738456 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 
10:07:47.738467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.738482 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.738492 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.741921 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:47 crc kubenswrapper[4904]: W0223 10:07:47.753528 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0acf61bd_42c5_4566_ac29_815afead2012.slice/crio-a4446c01fc69dfa03d531f75c23b6b5ef208d019b5c1c78b2e551a4518eba6ad WatchSource:0}: Error finding container a4446c01fc69dfa03d531f75c23b6b5ef208d019b5c1c78b2e551a4518eba6ad: Status 404 returned error can't find the container with id a4446c01fc69dfa03d531f75c23b6b5ef208d019b5c1c78b2e551a4518eba6ad Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.840821 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.840854 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.840866 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.840882 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.840895 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.944107 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.944148 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.944158 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.944173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:47 crc kubenswrapper[4904]: I0223 10:07:47.944184 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:47Z","lastTransitionTime":"2026-02-23T10:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.046709 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.046773 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.046784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.046797 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.046807 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.149810 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.150127 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.150139 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.150157 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.150169 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.253020 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.253060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.253071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.253090 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.253102 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.273147 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.356746 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.356800 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.356817 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.356841 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.356861 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.374125 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:29:31.002074202 +0000 UTC Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.440104 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700" exitCode=0 Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.440225 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.440289 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"a4446c01fc69dfa03d531f75c23b6b5ef208d019b5c1c78b2e551a4518eba6ad"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.444948 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.445016 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.445036 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"0463176bcf354157a516838fd6cca83e66efe42c159ff3ee5a5e16c6113ebee5"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.447100 4904 generic.go:334] "Generic (PLEG): container finished" podID="6f16e2e7-7479-4c7d-9582-00896887abcc" containerID="1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8" exitCode=0 Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.447154 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" event={"ID":"6f16e2e7-7479-4c7d-9582-00896887abcc","Type":"ContainerDied","Data":"1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.448983 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fm2n2" event={"ID":"65ad73a3-cf4b-49ec-b994-2d52cb43bc76","Type":"ContainerStarted","Data":"7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.449027 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fm2n2" event={"ID":"65ad73a3-cf4b-49ec-b994-2d52cb43bc76","Type":"ContainerStarted","Data":"6b7a865d2d0bbeaf2407152b5dd66d6fc9c4b0015383c8cbb1e9b7a5852585f8"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.454583 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.459640 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.459672 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.459682 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.459697 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.459707 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.476595 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.492094 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.503972 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.517245 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.530153 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.540575 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.554837 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.562575 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
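Every "Failed to update status for pod" record in this stretch fails for one shared reason: each status PATCH from the kubelet is intercepted by the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743, and TLS verification of that webhook's serving certificate fails because the certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-02-23. The interleaved "Node became not ready" records are a separate symptom of the same startup window: ovn-kubernetes has not yet written a CNI config into /etc/kubernetes/cni/net.d/, so the kubelet keeps reporting NetworkReady=false until ovnkube-controller comes up. Below is a minimal Go sketch, not taken from the kubelet or webhook source, of the NotBefore/NotAfter check that produces exactly this x509 error string; the certificate path assumes the webhook container's /etc/webhook-cert/ mount seen in the network-node-identity-vrzqb status above, and the tls.crt filename is an assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path: the webhook container mounts its serving cert
	// at /etc/webhook-cert/ (per the volumeMounts above); the tls.crt
	// filename is assumed, not confirmed by the log.
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}

	// Decode the first PEM block and parse it as an X.509 certificate.
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}

	// The validity-window check a TLS client performs during verification.
	// In this log the node clock (2026-02-23) is after NotAfter
	// (2025-08-24T17:21:41Z), so the "expired" branch is taken.
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is valid until", cert.NotAfter.UTC().Format(time.RFC3339))
}

Under these assumptions, running the sketch on this node would print the same "certificate has expired or is not yet valid" line with NotAfter = 2025-08-24T17:21:41Z. Rotating the webhook's serving certificate (CRC clusters typically regenerate such certificates on first start after a long suspend) clears the status-patch failures, while the NotReady condition clears separately once ovnkube-controller writes its CNI config to /etc/kubernetes/cni/net.d/.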
Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.562606 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.562617 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.562631 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.562643 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.572705 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c207368474029578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249b
b20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.585276 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\"
,\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.597672 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.608003 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.619061 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.634518 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.645829 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.658671 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sy
sctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.665049 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.665082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.665090 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.665104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.665113 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.676440 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.691844 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.702564 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.715280 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.726749 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.735769 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.745545 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.768290 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.768325 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.768335 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.768350 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.768361 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.769004 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2
b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.784037 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.792583 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.803065 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.820300 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:48Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.869969 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.870000 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.870008 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.870021 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.870029 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.972245 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.972292 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.972311 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.972333 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:48 crc kubenswrapper[4904]: I0223 10:07:48.972348 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:48Z","lastTransitionTime":"2026-02-23T10:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.076051 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.076447 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.076460 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.076493 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.076506 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.178202 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.178233 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.178242 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.178254 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.178263 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.254911 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.254958 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.254998 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:49 crc kubenswrapper[4904]: E0223 10:07:49.255028 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:49 crc kubenswrapper[4904]: E0223 10:07:49.255341 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:49 crc kubenswrapper[4904]: E0223 10:07:49.255409 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.279655 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.279684 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.279693 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.279706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.279728 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.374886 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:18:02.408053205 +0000 UTC Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.382709 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.383067 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.383149 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.383679 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.383755 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.455361 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.455405 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.455415 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.455424 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.455433 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.455442 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.456983 4904 generic.go:334] "Generic (PLEG): container finished" podID="6f16e2e7-7479-4c7d-9582-00896887abcc" 
containerID="2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba" exitCode=0 Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.457013 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" event={"ID":"6f16e2e7-7479-4c7d-9582-00896887abcc","Type":"ContainerDied","Data":"2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.468213 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.485991 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.486035 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.486047 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.486062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.486072 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.487873 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.505298 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.519148 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.536759 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c207368474029578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3
f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.549143 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\"
,\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.559896 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.569854 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.582322 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.587980 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.588017 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.588029 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.588047 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.588059 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.600373 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2
b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.612890 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.623558 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.633787 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.646885 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:49Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.690258 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.690296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.690305 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.690320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.690330 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.792561 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.792606 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.792617 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.792637 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.792649 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.894979 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.895023 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.895038 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.895058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.895075 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.998442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.998501 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.998524 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.998551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:49 crc kubenswrapper[4904]: I0223 10:07:49.998575 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:49Z","lastTransitionTime":"2026-02-23T10:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.101397 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.101439 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.101452 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.101469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.101481 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.204357 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.204416 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.204436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.204461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.204478 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.307356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.307688 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.307701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.307771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.307791 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.375757 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:27:58.045725804 +0000 UTC Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.409641 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.409680 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.409691 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.409706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.409745 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.462585 4904 generic.go:334] "Generic (PLEG): container finished" podID="6f16e2e7-7479-4c7d-9582-00896887abcc" containerID="e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398" exitCode=0 Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.462640 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" event={"ID":"6f16e2e7-7479-4c7d-9582-00896887abcc","Type":"ContainerDied","Data":"e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.490813 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.506994 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.515180 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.515212 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.515221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.515235 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.515244 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.522561 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.537207 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.550457 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.562152 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.576468 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.590307 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.604583 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z"
Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.622104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.622137 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.622147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.622159 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.622167 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.624205 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2
b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.635326 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.645746 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.655087 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.671703 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:50Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.724197 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.724236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.724247 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.724264 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.724277 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.826276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.826575 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.826741 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.826896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.827028 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.929596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.929650 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.929667 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.929689 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:50 crc kubenswrapper[4904]: I0223 10:07:50.929705 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:50Z","lastTransitionTime":"2026-02-23T10:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.031893 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.031937 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.031949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.031965 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.031978 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.134293 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.134341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.134355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.134374 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.134388 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.237518 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.237570 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.237582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.237602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.237615 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.254640 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:51 crc kubenswrapper[4904]: E0223 10:07:51.254897 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.255275 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:51 crc kubenswrapper[4904]: E0223 10:07:51.255405 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.256037 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:51 crc kubenswrapper[4904]: E0223 10:07:51.256182 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.341338 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.341369 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.341378 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.341390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.341399 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.375889 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:57:50.234065874 +0000 UTC Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.444416 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.444467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.444487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.444518 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.444539 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.468631 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"}
Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.472461 4904 generic.go:334] "Generic (PLEG): container finished" podID="6f16e2e7-7479-4c7d-9582-00896887abcc" containerID="03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e" exitCode=0
Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.472532 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" event={"ID":"6f16e2e7-7479-4c7d-9582-00896887abcc","Type":"ContainerDied","Data":"03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e"}
Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.490395 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.505624 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.517084 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.528281 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.539020 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.547184 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.547310 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.547390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.547460 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.547521 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.553057 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:
07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.575773 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.591010 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.607577 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z 
is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.617667 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.628104 4904 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.638755 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.650104 4904 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.650131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.650139 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.650152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.650162 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.650668 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.662467 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.752182 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.752218 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.752229 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.752243 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.752252 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.854915 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.854964 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.854973 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.854989 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.855000 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.956967 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.957007 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.957014 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.957030 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:51 crc kubenswrapper[4904]: I0223 10:07:51.957038 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:51Z","lastTransitionTime":"2026-02-23T10:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.059173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.059206 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.059219 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.059233 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.059242 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.161512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.161730 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.161739 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.161751 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.161762 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.272159 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.272193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.272202 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.272215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.272227 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.374435 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.374490 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.374501 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.374564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.374578 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.376629 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:31:44.75804245 +0000 UTC Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.476527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.476558 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.476566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.476578 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.476586 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.479660 4904 generic.go:334] "Generic (PLEG): container finished" podID="6f16e2e7-7479-4c7d-9582-00896887abcc" containerID="3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e" exitCode=0 Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.479692 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" event={"ID":"6f16e2e7-7479-4c7d-9582-00896887abcc","Type":"ContainerDied","Data":"3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.502789 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.516413 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.536651 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.551682 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.561671 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.572332 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.579359 4904 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.579402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.579414 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.579434 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.579445 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.586926 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.599882 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.620001 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.633101 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.653053 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.664742 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.677562 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.683159 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.683187 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.683195 4904 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.683208 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.683218 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.691347 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:52Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.785282 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.785343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.785356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.785375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.785387 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.888397 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.888443 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.888453 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.888470 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.888481 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.991082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.991111 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.991119 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.991133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:52 crc kubenswrapper[4904]: I0223 10:07:52.991141 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:52Z","lastTransitionTime":"2026-02-23T10:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.093292 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.093324 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.093332 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.093345 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.093354 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.195461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.195508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.195518 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.195538 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.195549 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.254279 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.254395 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.254297 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.254454 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.254283 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.254496 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.296940 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xrpzc"] Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.297303 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.297343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.297364 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.297371 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.297387 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.297395 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.299134 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.299567 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.299625 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.300446 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.318038 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.326903 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-host\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.327085 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98lh\" (UniqueName: \"kubernetes.io/projected/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-kube-api-access-g98lh\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.327169 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-serviceca\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.330806 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\"
,\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.343280 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.352833 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.365395 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.376786 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:56:59.755600755 +0000 UTC Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.377686 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.389659 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db
070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.398087 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.399608 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.399791 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.399902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.400044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.400164 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.409220 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.427616 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z 
is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.428165 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-host\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.428225 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g98lh\" (UniqueName: \"kubernetes.io/projected/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-kube-api-access-g98lh\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.428244 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-serviceca\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.428407 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-host\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.429152 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-serviceca\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.437894 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.445457 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g98lh\" (UniqueName: \"kubernetes.io/projected/46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d-kube-api-access-g98lh\") pod \"node-ca-xrpzc\" (UID: \"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\") " pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.447026 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.455566 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\
"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.467254 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.476609 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.478588 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.478730 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.478810 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.478880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.478946 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.487518 4904 generic.go:334] "Generic (PLEG): container finished" podID="6f16e2e7-7479-4c7d-9582-00896887abcc" containerID="45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb" exitCode=0 Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.487679 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" event={"ID":"6f16e2e7-7479-4c7d-9582-00896887abcc","Type":"ContainerDied","Data":"45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb"} Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.492651 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.495083 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: 
I0223 10:07:53.495113 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.495124 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.495140 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.495152 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.501448 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f
0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.510350 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.514247 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.514405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.514429 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.514436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.514449 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.514458 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.538113 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.549902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.549945 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.549957 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.549973 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.549981 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.573212 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.574272 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.575954 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.575980 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.575987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.576000 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.576008 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.587864 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.589577 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a
448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: E0223 10:07:53.589730 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.591365 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.591396 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.591405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.591419 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.591427 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.602137 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.610696 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xrpzc" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.612253 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: W0223 10:07:53.621872 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46ee48db_2cbe_4dbf_a8dc_f6657da8cd7d.slice/crio-629fc70951dbb7110af7245230aa8f6870d16fec07903dc90adf4d58f1608ecb WatchSource:0}: Error finding container 629fc70951dbb7110af7245230aa8f6870d16fec07903dc90adf4d58f1608ecb: Status 404 returned error can't find the container with id 629fc70951dbb7110af7245230aa8f6870d16fec07903dc90adf4d58f1608ecb Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.625294 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\
\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.634787 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.648354 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.669438 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c207368474029578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.688951 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.693529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.693557 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.693566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.693579 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.693587 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.701034 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.711567 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.722931 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.734332 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.795264 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.795334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.795356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.795391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.795417 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.898243 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.898296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.898307 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.898325 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:53 crc kubenswrapper[4904]: I0223 10:07:53.898336 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:53Z","lastTransitionTime":"2026-02-23T10:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.000767 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.000815 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.000827 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.000845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.000857 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.045863 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.045945 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.046062 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.046105 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:26.046093644 +0000 UTC m=+139.466467157 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.046264 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:26.046257499 +0000 UTC m=+139.466631012 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.103274 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.103313 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.103321 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.103336 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.103345 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.147133 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.147187 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.147214 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147322 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147364 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147376 4904 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:26.147360875 +0000 UTC m=+139.567734388 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147381 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147396 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147402 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147442 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147462 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147442 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:26.147427317 +0000 UTC m=+139.567800840 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:54 crc kubenswrapper[4904]: E0223 10:07:54.147547 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:26.1475252 +0000 UTC m=+139.567898733 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.205520 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.205546 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.205553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.205566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.205575 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.308402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.308440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.308449 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.308463 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.308472 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.377508 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:22:15.784917952 +0000 UTC Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.411103 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.411130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.411138 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.411151 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.411161 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.492516 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xrpzc" event={"ID":"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d","Type":"ContainerStarted","Data":"e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.492593 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xrpzc" event={"ID":"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d","Type":"ContainerStarted","Data":"629fc70951dbb7110af7245230aa8f6870d16fec07903dc90adf4d58f1608ecb"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.497588 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" event={"ID":"6f16e2e7-7479-4c7d-9582-00896887abcc","Type":"ContainerStarted","Data":"f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.502207 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.502728 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.502769 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.502787 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.533527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.533554 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.533563 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.533576 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.533586 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.535781 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\
"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.553645 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.554938 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.556250 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.567176 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.585539 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.601315 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.616043 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.626977 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.635740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.635777 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.635788 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.635803 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.635815 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.637810 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.647226 4904 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.661940 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.672994 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.682230 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.691849 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.704430 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.713339 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.723390 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.731771 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.738299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.738332 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.738341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.738353 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.738363 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.741930 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.752831 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.760837 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.773621 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.787017 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.799609 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.806874 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.820841 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 
10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.831692 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.841140 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.841176 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.841188 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.841205 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.841216 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.843673 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.853579 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.862604 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.873448 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.893682 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320b72ddbaeba8eb33036734f8e8fb6eebad8d1a
978fd431d8978f07b84d985e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.915351 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.927474 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.937411 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.942835 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.942857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.942864 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.942876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.942885 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:54Z","lastTransitionTime":"2026-02-23T10:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.946472 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.955001 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.964099 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.975282 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.983415 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:54 crc kubenswrapper[4904]: I0223 10:07:54.994039 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:54Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.009761 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320b72ddbaeba8eb33036734f8e8fb6eebad8d1a
978fd431d8978f07b84d985e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.020918 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.029129 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.038503 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.044484 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.044609 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.044687 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.044798 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.045037 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.050006 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.058656 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.147605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.147642 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.147652 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.147666 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.147675 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.250479 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.250520 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.250528 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.250547 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.250555 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.255004 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.255065 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:55 crc kubenswrapper[4904]: E0223 10:07:55.255107 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:55 crc kubenswrapper[4904]: E0223 10:07:55.255172 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.255204 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:55 crc kubenswrapper[4904]: E0223 10:07:55.255303 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.353537 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.353583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.353600 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.353620 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.353635 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.378818 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:51:07.628361003 +0000 UTC Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.456116 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.456393 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.456472 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.456543 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.456621 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.559388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.559425 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.559435 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.559450 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.559460 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.662649 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.664304 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.664892 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.665009 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.665128 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.767930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.768263 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.768377 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.768482 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.768556 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.871949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.872161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.872226 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.872376 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.872441 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.974939 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.974966 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.974973 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.974987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:55 crc kubenswrapper[4904]: I0223 10:07:55.974995 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:55Z","lastTransitionTime":"2026-02-23T10:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.077045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.077333 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.077342 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.077356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.077365 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.179561 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.179596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.179605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.179618 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.179628 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.281736 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.281772 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.281782 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.281797 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.281806 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.380200 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:14:21.954828357 +0000 UTC Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.384571 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.384609 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.384617 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.384631 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.384640 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.486841 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.486876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.486886 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.486902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.486914 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.508921 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/0.log" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.512173 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e" exitCode=1 Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.512206 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.512830 4904 scope.go:117] "RemoveContainer" containerID="320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.523502 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.536005 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.551595 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.562542 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.584664 4904 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.589674 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.589709 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.589758 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.589778 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.589790 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.599479 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.609862 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.620973 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.632538 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.644079 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.658574 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.680972 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.691658 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.691694 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.691704 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.691732 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.691744 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.692212 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.709008 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:56Z\\\",\\\"message\\\":\\\":311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.339995 6622 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 10:07:56.340011 6622 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 10:07:56.339937 6622 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 10:07:56.340020 6622 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 10:07:56.340124 6622 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 10:07:56.340187 6622 factory.go:656] Stopping watch factory\\\\nI0223 10:07:56.340196 6622 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 10:07:56.340282 6622 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 10:07:56.340304 6622 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 10:07:56.340366 6622 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.340468 6622 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.340636 6622 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.718087 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.793941 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.793978 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.793988 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.794010 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.794021 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.903191 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.903236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.903248 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.903262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:56 crc kubenswrapper[4904]: I0223 10:07:56.903272 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:56Z","lastTransitionTime":"2026-02-23T10:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.005081 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.005118 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.005129 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.005144 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.005154 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.107847 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.107901 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.107909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.107925 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.107934 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.210113 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.210141 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.210150 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.210162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.210170 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.254905 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.254950 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:57 crc kubenswrapper[4904]: E0223 10:07:57.255036 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.255131 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:57 crc kubenswrapper[4904]: E0223 10:07:57.255213 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:57 crc kubenswrapper[4904]: E0223 10:07:57.255365 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.267952 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.286772 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.296849 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.307544 4904 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.312304 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.312339 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.312349 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.312365 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.312378 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.317729 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.330243 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.343914 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.361425 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.373503 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.380683 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:56:47.862452764 +0000 UTC Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.387960 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.410992 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.414604 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.414663 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.414678 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.414697 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.414710 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.426211 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.446044 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:56Z\\\",\\\"message\\\":\\\":311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.339995 6622 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 10:07:56.340011 6622 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 10:07:56.339937 6622 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 10:07:56.340020 6622 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 10:07:56.340124 6622 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 10:07:56.340187 6622 factory.go:656] Stopping watch factory\\\\nI0223 10:07:56.340196 6622 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 10:07:56.340282 6622 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 10:07:56.340304 6622 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 10:07:56.340366 6622 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.340468 6622 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.340636 6622 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.460108 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.476439 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.517830 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.517879 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.517889 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.517909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.517919 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.520803 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/1.log" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.521851 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/0.log" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.525504 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9" exitCode=1 Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.525615 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.525791 4904 scope.go:117] "RemoveContainer" containerID="320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.526258 4904 scope.go:117] "RemoveContainer" containerID="ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9" Feb 23 10:07:57 crc kubenswrapper[4904]: E0223 10:07:57.526586 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.540956 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.558022 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.580108 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a591210
1b6948834faa89f5f8b781b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320b72ddbaeba8eb33036734f8e8fb6eebad8d1a978fd431d8978f07b84d985e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:56Z\\\",\\\"message\\\":\\\":311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.339995 6622 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 10:07:56.340011 6622 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 10:07:56.339937 6622 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 10:07:56.340020 6622 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 10:07:56.340124 6622 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 10:07:56.340187 6622 factory.go:656] Stopping watch factory\\\\nI0223 10:07:56.340196 6622 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 10:07:56.340282 6622 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 10:07:56.340304 6622 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 10:07:56.340366 6622 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.340468 6622 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 10:07:56.340636 6622 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:57Z\\\",\\\"message\\\":\\\"rio-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246289 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246312 6783 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0223 10:07:57.246279 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0223 10:07:57.246268 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-fm2n2 after 0 failed attempt(s)\\\\nI0223 10:07:57.246327 6783 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-fm2n2\\\\nI0223 10:07:57.246314 6783 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0223 10:07:57.246305 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0223 10:07:57.246345 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0223 10:07:57.246349 6783 
default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0223 10:07:57.246168 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a530
9089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.597030 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.610540 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.620954 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.620998 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.621011 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.621032 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.621043 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.623751 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.637880 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.651422 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.676843 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.691063 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.708211 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.720702 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.723944 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.724001 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.724012 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.724034 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.724051 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.733364 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.745286 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.763983 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.827966 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.828007 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc 
kubenswrapper[4904]: I0223 10:07:57.828021 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.828044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.828058 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.931915 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.931973 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.931985 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.932002 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:57 crc kubenswrapper[4904]: I0223 10:07:57.932018 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:57Z","lastTransitionTime":"2026-02-23T10:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.034619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.034653 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.034661 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.034673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.034682 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.137851 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.137920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.137937 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.137961 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.137977 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.240886 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.240934 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.240947 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.240964 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.240977 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.343588 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.343632 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.343641 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.343657 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.343666 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.381542 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:01:02.666643407 +0000 UTC Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.446527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.446598 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.446620 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.446650 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.446669 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.530431 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/1.log" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.534882 4904 scope.go:117] "RemoveContainer" containerID="ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9" Feb 23 10:07:58 crc kubenswrapper[4904]: E0223 10:07:58.535078 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.547862 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.549688 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.549748 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.549758 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.549771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.549780 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.558202 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.568954 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.580288 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.588529 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.599371 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.612313 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.636110 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.652650 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.653901 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.653949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.653965 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.653990 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.654007 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.664231 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.676673 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.694780 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.704915 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.714743 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.730043 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a591210
1b6948834faa89f5f8b781b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:57Z\\\",\\\"message\\\":\\\"rio-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246289 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246312 6783 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0223 10:07:57.246279 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0223 10:07:57.246268 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-fm2n2 after 0 failed attempt(s)\\\\nI0223 10:07:57.246327 6783 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-fm2n2\\\\nI0223 10:07:57.246314 6783 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0223 10:07:57.246305 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0223 10:07:57.246345 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0223 10:07:57.246349 6783 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0223 10:07:57.246168 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:58Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.756828 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.756853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.756861 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.756875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.756883 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.858876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.858904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.858911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.858924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.858932 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.961002 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.961065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.961076 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.961093 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:58 crc kubenswrapper[4904]: I0223 10:07:58.961105 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:58Z","lastTransitionTime":"2026-02-23T10:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.063495 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.063528 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.063537 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.063610 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.063627 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.165116 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n"] Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.165584 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.173495 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.173683 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.184138 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.184172 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.184181 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.184194 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.184203 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.201356 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.235383 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.246919 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.255089 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.255103 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.255103 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:07:59 crc kubenswrapper[4904]: E0223 10:07:59.255194 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:07:59 crc kubenswrapper[4904]: E0223 10:07:59.255272 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:07:59 crc kubenswrapper[4904]: E0223 10:07:59.255342 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.258279 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.267864 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.280158 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.286224 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.286267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.286279 4904 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.286296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.286308 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.297542 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.300356 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.300405 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.300446 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j68r\" (UniqueName: \"kubernetes.io/projected/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-kube-api-access-4j68r\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.300462 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.311455 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.331611 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.344467 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.355029 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.367027 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.377263 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.382558 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:11:28.988896291 +0000 UTC Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.388328 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.388365 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.388376 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.388394 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.388406 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.391214 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.401046 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.401083 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.401126 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.401157 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j68r\" (UniqueName: \"kubernetes.io/projected/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-kube-api-access-4j68r\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.401653 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.402013 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 
10:07:59.408009 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.409577 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\
"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:57Z\\\",\\\"message\\\":\\\"rio-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246289 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246312 6783 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0223 10:07:57.246279 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0223 10:07:57.246268 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-fm2n2 after 0 failed attempt(s)\\\\nI0223 10:07:57.246327 6783 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-fm2n2\\\\nI0223 10:07:57.246314 6783 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0223 10:07:57.246305 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0223 10:07:57.246345 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0223 10:07:57.246349 6783 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0223 
10:07:57.246168 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.416235 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j68r\" (UniqueName: \"kubernetes.io/projected/e66580d6-ac94-4391-ae25-c2a99a3cf3bd-kube-api-access-4j68r\") pod \"ovnkube-control-plane-749d76644c-cnx2n\" (UID: \"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.424031 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.490926 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.491206 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.491317 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.491392 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.491463 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.496396 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" Feb 23 10:07:59 crc kubenswrapper[4904]: W0223 10:07:59.509286 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode66580d6_ac94_4391_ae25_c2a99a3cf3bd.slice/crio-763339e56b2c6709bc1948f66f45a7eabecff7c1a07cee6f9d45f8ff577dd5f3 WatchSource:0}: Error finding container 763339e56b2c6709bc1948f66f45a7eabecff7c1a07cee6f9d45f8ff577dd5f3: Status 404 returned error can't find the container with id 763339e56b2c6709bc1948f66f45a7eabecff7c1a07cee6f9d45f8ff577dd5f3 Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.541417 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" event={"ID":"e66580d6-ac94-4391-ae25-c2a99a3cf3bd","Type":"ContainerStarted","Data":"763339e56b2c6709bc1948f66f45a7eabecff7c1a07cee6f9d45f8ff577dd5f3"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.594021 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.594058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.594069 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.594085 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.594097 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.696653 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.696702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.696728 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.696745 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.696757 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.798470 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.798512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.798526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.798544 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.798554 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.900577 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.900620 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.900627 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.900641 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.900651 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:07:59Z","lastTransitionTime":"2026-02-23T10:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.936485 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rmw4r"] Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.936881 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:07:59 crc kubenswrapper[4904]: E0223 10:07:59.936937 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.947383 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 
10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.961536 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.978544 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:57Z\\\",\\\"message\\\":\\\"rio-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246289 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246312 6783 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0223 10:07:57.246279 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0223 10:07:57.246268 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-fm2n2 after 0 failed attempt(s)\\\\nI0223 10:07:57.246327 6783 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-fm2n2\\\\nI0223 10:07:57.246314 6783 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0223 10:07:57.246305 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0223 10:07:57.246345 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0223 10:07:57.246349 6783 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0223 10:07:57.246168 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:07:59 crc kubenswrapper[4904]: I0223 10:07:59.988758 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:59Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.003796 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.003849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.003872 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.003894 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.003909 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.005571 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.005633 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn9mm\" (UniqueName: \"kubernetes.io/projected/3ad99ed9-56d8-464c-94ce-e861240dd0a5-kube-api-access-xn9mm\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.007218 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.018836 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.028817 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.038905 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.049374 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\
\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.059422 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.070850 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.089296 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-2
3T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c207368474029578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca
17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.101579 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.106109 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn9mm\" (UniqueName: \"kubernetes.io/projected/3ad99ed9-56d8-464c-94ce-e861240dd0a5-kube-api-access-xn9mm\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.106210 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.106231 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.106234 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.106340 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:00 crc kubenswrapper[4904]: E0223 10:08:00.106420 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:00 crc kubenswrapper[4904]: E0223 10:08:00.106472 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs podName:3ad99ed9-56d8-464c-94ce-e861240dd0a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:00.606453385 +0000 UTC m=+114.026826898 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs") pod "network-metrics-daemon-rmw4r" (UID: "3ad99ed9-56d8-464c-94ce-e861240dd0a5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.106424 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.106496 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.115110 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.121342 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn9mm\" (UniqueName: 
\"kubernetes.io/projected/3ad99ed9-56d8-464c-94ce-e861240dd0a5-kube-api-access-xn9mm\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.128798 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.141329 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.154535 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.208850 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.208895 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.208904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.208916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.208926 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.311338 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.311370 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.311394 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.311408 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.311417 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.382981 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:44:23.694845326 +0000 UTC Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.414110 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.414140 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.414151 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.414166 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.414176 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.517123 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.517167 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.517178 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.517194 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.517207 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.547286 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" event={"ID":"e66580d6-ac94-4391-ae25-c2a99a3cf3bd","Type":"ContainerStarted","Data":"f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.547341 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" event={"ID":"e66580d6-ac94-4391-ae25-c2a99a3cf3bd","Type":"ContainerStarted","Data":"1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.579094 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.596120 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.611203 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:00 crc kubenswrapper[4904]: E0223 10:08:00.611432 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:00 crc kubenswrapper[4904]: E0223 10:08:00.611508 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs podName:3ad99ed9-56d8-464c-94ce-e861240dd0a5 
nodeName:}" failed. No retries permitted until 2026-02-23 10:08:01.611485289 +0000 UTC m=+115.031858822 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs") pod "network-metrics-daemon-rmw4r" (UID: "3ad99ed9-56d8-464c-94ce-e861240dd0a5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.613068 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.619327 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.619368 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.619376 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.619388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.619397 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.622986 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.634777 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.647334 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.661364 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.671838 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.689145 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.713563 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a591210
1b6948834faa89f5f8b781b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:57Z\\\",\\\"message\\\":\\\"rio-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246289 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246312 6783 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0223 10:07:57.246279 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0223 10:07:57.246268 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-fm2n2 after 0 failed attempt(s)\\\\nI0223 10:07:57.246327 6783 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-fm2n2\\\\nI0223 10:07:57.246314 6783 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0223 10:07:57.246305 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0223 10:07:57.246345 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0223 10:07:57.246349 6783 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0223 10:07:57.246168 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.721220 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.721269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.721285 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.721310 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.721327 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.727756 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.740301 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z" Feb 23 
10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.751388 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.762202 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.773217 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.788925 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.800286 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:00Z is after 2025-08-24T17:21:41Z"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.823050 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.823092 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.823103 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.823117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.823126 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.925634 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.925745 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.925772 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.925804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:00 crc kubenswrapper[4904]: I0223 10:08:00.925831 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:00Z","lastTransitionTime":"2026-02-23T10:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.032691 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.032741 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.032750 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.032764 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.032773 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.136426 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.136492 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.136509 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.136536 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.136553 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.239415 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.239686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.239876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.240022 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.240157 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.255876 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.255904 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r"
Feb 23 10:08:01 crc kubenswrapper[4904]: E0223 10:08:01.255984 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.256087 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 10:08:01 crc kubenswrapper[4904]: E0223 10:08:01.256276 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.256781 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:08:01 crc kubenswrapper[4904]: E0223 10:08:01.258418 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 10:08:01 crc kubenswrapper[4904]: E0223 10:08:01.256481 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.342752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.342789 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.342801 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.342816 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.342826 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.383621 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:53:24.595331131 +0000 UTC
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.445293 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.445326 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.445336 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.445350 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.445359 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.547884 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.547933 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.547949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.547965 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.547976 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.622969 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r"
Feb 23 10:08:01 crc kubenswrapper[4904]: E0223 10:08:01.623148 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 10:08:01 crc kubenswrapper[4904]: E0223 10:08:01.623212 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs podName:3ad99ed9-56d8-464c-94ce-e861240dd0a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:03.623193451 +0000 UTC m=+117.043566964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs") pod "network-metrics-daemon-rmw4r" (UID: "3ad99ed9-56d8-464c-94ce-e861240dd0a5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.649529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.649559 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.649568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.649604 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.649613 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.752064 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.752106 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.752120 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.752135 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.752147 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.854334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.854405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.854447 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.854476 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.854497 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.957061 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.957095 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.957103 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.957117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:01 crc kubenswrapper[4904]: I0223 10:08:01.957125 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:01Z","lastTransitionTime":"2026-02-23T10:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.058549 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.058578 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.058586 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.058619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.058628 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.160619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.160677 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.160686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.160707 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.160735 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.263111 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.263167 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.263182 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.263198 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.263208 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.364882 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.364911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.364919 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.364932 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.364944 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.384610 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:46:58.904213286 +0000 UTC
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.466465 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.466545 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.466557 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.466573 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.466585 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.573855 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.573913 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.573924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.573942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.573953 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.676218 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.676273 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.676284 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.676300 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.676310 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.779156 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.779217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.779231 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.779253 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.779268 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.882690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.882952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.882962 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.882976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.882985 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.986994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.987303 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.987380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.987453 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:02 crc kubenswrapper[4904]: I0223 10:08:02.987522 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:02Z","lastTransitionTime":"2026-02-23T10:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.089873 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.089935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.089946 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.089976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.089989 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.193249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.193317 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.193329 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.193358 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.193373 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.255329 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.255429 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.256008 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.255522 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.255438 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.256276 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.256358 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5"
Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.256510 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.296708 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.297329 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.297866 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.298346 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.298793 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.384891 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:41:01.487394059 +0000 UTC
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.402556 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.402602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.402619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.402645 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.402663 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.505932 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.505971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.505983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.505997 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.506033 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.609573 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.609919 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.610060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.610159 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.610252 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.648699 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r"
Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.649019 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.649167 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs podName:3ad99ed9-56d8-464c-94ce-e861240dd0a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:07.649149009 +0000 UTC m=+121.069522522 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs") pod "network-metrics-daemon-rmw4r" (UID: "3ad99ed9-56d8-464c-94ce-e861240dd0a5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.713512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.715005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.715211 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.715398 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.715563 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.758432 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.758470 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.758481 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.758498 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.758511 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.782785 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.790264 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.790577 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.790823 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.791055 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.791238 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.813085 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.819277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.819356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.819383 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.819422 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.819447 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.842089 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.848205 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.848283 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.848310 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.848345 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.848371 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.871380 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.876996 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.877080 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.877111 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.877144 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.877167 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.892435 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:03 crc kubenswrapper[4904]: E0223 10:08:03.892558 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.894793 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.894887 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.894908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.894940 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.894962 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.998693 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.998856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.998875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.998902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:03 crc kubenswrapper[4904]: I0223 10:08:03.998985 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:03Z","lastTransitionTime":"2026-02-23T10:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.102706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.103147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.103341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.103524 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.103727 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.206234 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.206267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.206275 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.206290 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.206299 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.310210 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.310271 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.310288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.310311 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.310333 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.385462 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:06:26.626205741 +0000 UTC Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.412908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.412964 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.412981 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.413005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.413022 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.516040 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.516081 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.516093 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.516109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.516124 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.620149 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.620364 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.620399 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.620479 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.620539 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.725498 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.725553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.725565 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.725589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.725604 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.829671 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.829824 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.829853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.829883 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.829902 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.933472 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.933519 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.933529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.933547 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:04 crc kubenswrapper[4904]: I0223 10:08:04.933558 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:04Z","lastTransitionTime":"2026-02-23T10:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.037402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.037499 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.037520 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.037549 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.037569 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.141535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.141684 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.141707 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.141793 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.141819 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.244770 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.244854 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.244883 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.244914 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.244933 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.255458 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.255534 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.255459 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:05 crc kubenswrapper[4904]: E0223 10:08:05.255654 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:05 crc kubenswrapper[4904]: E0223 10:08:05.256007 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:05 crc kubenswrapper[4904]: E0223 10:08:05.256069 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.256821 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:05 crc kubenswrapper[4904]: E0223 10:08:05.257005 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.348009 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.348071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.348088 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.348112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.348131 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.386652 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:11:03.550293601 +0000 UTC Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.474234 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.474296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.474310 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.474327 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.474340 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.576211 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.576253 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.576264 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.576280 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.576293 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.678230 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.678267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.678282 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.678296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.678305 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.780786 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.780842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.780857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.780880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.780898 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.883160 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.883195 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.883206 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.883221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.883231 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.986177 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.986221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.986234 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.986248 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:05 crc kubenswrapper[4904]: I0223 10:08:05.986259 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:05Z","lastTransitionTime":"2026-02-23T10:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.089303 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.089368 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.089381 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.089403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.089416 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.192211 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.192262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.192276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.192295 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.192309 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.295931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.295976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.295986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.296000 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.296010 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.387187 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:33:46.76787315 +0000 UTC Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.399226 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.399295 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.399306 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.399325 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.399335 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.501898 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.501962 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.501977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.502001 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.502016 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.603959 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.603993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.604002 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.604016 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.604027 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.706623 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.706658 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.706673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.706688 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.706699 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.808306 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.808332 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.808341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.808355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.808365 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.911638 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.911697 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.911709 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.911768 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:06 crc kubenswrapper[4904]: I0223 10:08:06.911782 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:06Z","lastTransitionTime":"2026-02-23T10:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.014527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.014559 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.014568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.014580 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.014589 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:07Z","lastTransitionTime":"2026-02-23T10:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.119136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.119211 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.119232 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.119264 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.119291 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:07Z","lastTransitionTime":"2026-02-23T10:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:07 crc kubenswrapper[4904]: E0223 10:08:07.220582 4904 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.255052 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:07 crc kubenswrapper[4904]: E0223 10:08:07.255202 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.255069 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.255273 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:07 crc kubenswrapper[4904]: E0223 10:08:07.255404 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:07 crc kubenswrapper[4904]: E0223 10:08:07.255499 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.255629 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:07 crc kubenswrapper[4904]: E0223 10:08:07.255714 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.269887 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 
23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.285073 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.303757 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:57Z\\\",\\\"message\\\":\\\"rio-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246289 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246312 6783 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0223 10:07:57.246279 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0223 10:07:57.246268 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-fm2n2 after 0 failed attempt(s)\\\\nI0223 10:07:57.246327 6783 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-fm2n2\\\\nI0223 10:07:57.246314 6783 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0223 10:07:57.246305 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0223 10:07:57.246345 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0223 10:07:57.246349 6783 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0223 10:07:57.246168 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.317265 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.331492 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 
10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.344370 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: E0223 10:08:07.354062 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
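
Every status patch in this stretch of the journal fails identically: the kubelet cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-23. Note that the kubelet's own serving certificate is healthy (the certificate_manager line below reports an expiry of 2026-02-24), so only the webhook certificate is stale. A minimal Go sketch for confirming the certificate window from the node, assuming the webhook is still listening on 127.0.0.1:9743 as the log shows; InsecureSkipVerify is deliberate here, since verification is exactly what fails:

```go
// Dial the webhook endpoint seen in the log and print the serving
// certificate's validity window. Run on the node itself.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification on purpose: the point is to inspect a
		// certificate that would otherwise be rejected as expired.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```
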
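The same failure repeats once per pod as status_manager retries. To gauge the blast radius in a dump like this one, a short triage sketch (a hypothetical helper, not part of the log) that reads journal output on stdin and lists the distinct pods hitting the expired-certificate error; the pod="ns/name" field and the error text are taken verbatim from the records above, and the enlarged scanner buffer is an assumption sized to these very long records:

```go
// Usage sketch: journalctl -u kubelet --no-pager | go run triage.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	podRe := regexp.MustCompile(`pod="([^"]+)"`)
	seen := map[string]bool{}

	sc := bufio.NewScanner(os.Stdin)
	// Individual records in this journal run to tens of kilobytes,
	// well past bufio.Scanner's 64 KiB default token limit.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)

	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "certificate has expired or is not yet valid") {
			continue
		}
		if m := podRe.FindStringSubmatch(line); m != nil {
			seen[m[1]] = true
		}
	}
	fmt.Printf("%d pod(s) affected by the expired webhook certificate:\n", len(seen))
	for pod := range seen {
		fmt.Println(pod)
	}
}
```
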
Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.359771 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.372145 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.387779 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:27:11.649569031 +0000 UTC Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.388886 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.405272 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.422363 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.437115 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.449684 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.468614 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.488562 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.504338 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.518874 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:07Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:07 crc kubenswrapper[4904]: I0223 10:08:07.695081 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:07 crc kubenswrapper[4904]: E0223 10:08:07.695219 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:07 crc kubenswrapper[4904]: E0223 10:08:07.695273 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs podName:3ad99ed9-56d8-464c-94ce-e861240dd0a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:15.695258691 +0000 UTC m=+129.115632204 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs") pod "network-metrics-daemon-rmw4r" (UID: "3ad99ed9-56d8-464c-94ce-e861240dd0a5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:08 crc kubenswrapper[4904]: I0223 10:08:08.388453 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:51:02.024264709 +0000 UTC Feb 23 10:08:09 crc kubenswrapper[4904]: I0223 10:08:09.254769 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:09 crc kubenswrapper[4904]: I0223 10:08:09.254815 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:09 crc kubenswrapper[4904]: I0223 10:08:09.254775 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:09 crc kubenswrapper[4904]: E0223 10:08:09.254969 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:09 crc kubenswrapper[4904]: E0223 10:08:09.255060 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:09 crc kubenswrapper[4904]: E0223 10:08:09.255227 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:09 crc kubenswrapper[4904]: I0223 10:08:09.255402 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:09 crc kubenswrapper[4904]: E0223 10:08:09.255802 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:09 crc kubenswrapper[4904]: I0223 10:08:09.389772 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:51:28.743581175 +0000 UTC Feb 23 10:08:10 crc kubenswrapper[4904]: I0223 10:08:10.390874 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:54:39.846421186 +0000 UTC Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.254344 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.254404 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:11 crc kubenswrapper[4904]: E0223 10:08:11.254542 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.254344 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.255552 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:11 crc kubenswrapper[4904]: E0223 10:08:11.255700 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.256295 4904 scope.go:117] "RemoveContainer" containerID="ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9" Feb 23 10:08:11 crc kubenswrapper[4904]: E0223 10:08:11.256299 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:11 crc kubenswrapper[4904]: E0223 10:08:11.256374 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.391781 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:23:05.693564385 +0000 UTC Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.589358 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/1.log" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.594381 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.594774 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.628299 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f73991
40efa79233d2429bd04f63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:57Z\\\",\\\"message\\\":\\\"rio-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246289 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246312 6783 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0223 10:07:57.246279 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0223 10:07:57.246268 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-fm2n2 after 0 failed attempt(s)\\\\nI0223 10:07:57.246327 6783 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-fm2n2\\\\nI0223 10:07:57.246314 6783 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0223 10:07:57.246305 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0223 10:07:57.246345 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0223 10:07:57.246349 6783 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0223 10:07:57.246168 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.645704 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.658514 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.674444 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.694947 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 
10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.714338 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.731669 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.742256 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.769758 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.788937 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.807593 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.822481 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.834109 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.845802 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.859129 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.879424 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:11 crc kubenswrapper[4904]: I0223 10:08:11.894941 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:11Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: E0223 10:08:12.355037 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
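
Every "Failed to update status for pod" entry above fails for the same underlying reason: the kubelet's status patch must pass the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-23. The "x509: certificate has expired or is not yet valid" message is Go's standard NotBefore/NotAfter check. A minimal diagnostic sketch in Go (not part of any cluster component; the endpoint is taken from the log lines above) that dials the webhook and prints each presented certificate's validity window:

package main

// certcheck: dial the webhook endpoint seen in the log and report each
// certificate's validity window against the local clock. Diagnostic
// sketch only; InsecureSkipVerify is set so an already-expired chain
// can still be inspected.

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect even an invalid chain
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject.String(),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

Against the timestamps in the log, the gap is roughly six months, so none of these status patches can succeed until the webhook certificate is rotated or the node clock is corrected.
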
Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.392940 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:29:50.313364731 +0000 UTC Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.600668 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/2.log" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.602005 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/1.log" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.605426 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" exitCode=1 Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.605475 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.605509 4904 scope.go:117] "RemoveContainer" containerID="ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.606313 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:08:12 crc kubenswrapper[4904]: E0223 10:08:12.606654 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.625969 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.645209 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.667523 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f73991
40efa79233d2429bd04f63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed0ec9aed0be046bb183d9bf1b6bc4e55a5912101b6948834faa89f5f8b781b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:07:57Z\\\",\\\"message\\\":\\\"rio-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246289 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0223 10:07:57.246312 6783 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0223 10:07:57.246279 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0223 10:07:57.246268 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-fm2n2 after 0 failed attempt(s)\\\\nI0223 10:07:57.246327 6783 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-fm2n2\\\\nI0223 10:07:57.246314 6783 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0223 10:07:57.246305 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0223 10:07:57.246345 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0223 10:07:57.246349 6783 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0223 10:07:57.246168 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:08:12Z\\\",\\\"message\\\":\\\"bm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d6ce7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: catalog-operator,},ClusterIP:10.217.5.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0223 
10:08:12.160211 7034 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5
a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.682542 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.696129 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 
10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.714524 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.728352 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.743035 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.766340 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.779658 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.794836 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.814893 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.830167 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.845213 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.857650 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.869651 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:12 crc kubenswrapper[4904]: I0223 10:08:12.880655 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.254661 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.254751 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.254778 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.254686 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:13 crc kubenswrapper[4904]: E0223 10:08:13.254886 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:13 crc kubenswrapper[4904]: E0223 10:08:13.254971 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:13 crc kubenswrapper[4904]: E0223 10:08:13.255084 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:13 crc kubenswrapper[4904]: E0223 10:08:13.255187 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.393894 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:00:39.472561576 +0000 UTC Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.610620 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/2.log" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.614301 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:08:13 crc kubenswrapper[4904]: E0223 10:08:13.614502 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.625136 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.639331 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.651610 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.670458 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f73991
40efa79233d2429bd04f63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:08:12Z\\\",\\\"message\\\":\\\"bm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d6ce7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: catalog-operator,},ClusterIP:10.217.5.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0223 10:08:12.160211 7034 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:08:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.680344 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.692298 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.734021 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.755419 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.768853 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.785873 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.799418 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.813085 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.823623 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.835431 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.851169 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.873880 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:13 crc kubenswrapper[4904]: I0223 10:08:13.886705 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.133796 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.133836 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.133845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.133860 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.133868 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:14Z","lastTransitionTime":"2026-02-23T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:14 crc kubenswrapper[4904]: E0223 10:08:14.148949 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:14Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.152411 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.152428 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.152437 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.152451 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.152459 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:14Z","lastTransitionTime":"2026-02-23T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:14 crc kubenswrapper[4904]: E0223 10:08:14.165065 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:14Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.168808 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.168840 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.168848 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.168861 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.168869 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:14Z","lastTransitionTime":"2026-02-23T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:14 crc kubenswrapper[4904]: E0223 10:08:14.180516 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.184117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.184145 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
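
The kubelet does not retry these patches forever within one status sync: upstream it makes a small fixed number of attempts (the constant nodeStatusUpdateRetry, 5 in current kubelets) and then logs the "update node status exceeds retry count" error that appears a few entries below. A simplified sketch of that control flow, for illustration only (not the actual kubelet code):

    # Simplified sketch of the kubelet's bounded node-status retry
    # (tryUpdateNodeStatus in kubelet_node_status.go; the upstream
    # constant is nodeStatusUpdateRetry = 5).
    NODE_STATUS_UPDATE_RETRY = 5

    def update_node_status(patch_status):
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            err = patch_status()  # one PATCH of the node object per attempt
            if err is None:
                return None
            print("Error updating node status, will retry:", err)
        return "update node status exceeds retry count"

    # Every attempt fails the webhook call, exactly as in this journal:
    print(update_node_status(lambda: "failed calling webhook: certificate has expired"))
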
event="NodeHasNoDiskPressure" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.184154 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.184170 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.184182 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:14Z","lastTransitionTime":"2026-02-23T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:14 crc kubenswrapper[4904]: E0223 10:08:14.199037 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:14Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.202582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.202624 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
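
Aside from the Ready=False condition, the rejected patch also carries the node's resource accounting: capacity cpu 12 / memory 32865360Ki versus allocatable cpu 11800m / memory 32404560Ki, the gap being what the kubelet holds back for system daemons and eviction headroom. A quick check of those reserves, using the numbers from the patch above (note the log quirk that allocatable ephemeral-storage is printed in bytes while capacity is in Ki):

    # Figures copied from the rejected status patch above.
    capacity = {"cpu_m": 12 * 1000, "memory_ki": 32865360, "ephemeral_bytes": 83293888 * 1024}
    allocatable = {"cpu_m": 11800, "memory_ki": 32404560, "ephemeral_bytes": 76396645454}

    print("reserved cpu:", capacity["cpu_m"] - allocatable["cpu_m"], "millicores")               # 200
    print("reserved memory:", (capacity["memory_ki"] - allocatable["memory_ki"]) // 1024, "Mi")  # 450
    print("reserved ephemeral: %.1f GiB"
          % ((capacity["ephemeral_bytes"] - allocatable["ephemeral_bytes"]) / 2**30))            # ~8.3
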
event="NodeHasNoDiskPressure" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.202633 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.202646 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.202657 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:14Z","lastTransitionTime":"2026-02-23T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:14 crc kubenswrapper[4904]: E0223 10:08:14.216823 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:14Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:14 crc kubenswrapper[4904]: E0223 10:08:14.216944 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:08:14 crc kubenswrapper[4904]: I0223 10:08:14.394661 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
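
Interleaved with the failures, the kubelet's certificate manager keeps reporting the same kubelet-serving expiry (2026-02-24 05:53:03 UTC) with a different rotation deadline each time (2025-11-29 here; 2025-11-10 and 2025-12-21 in the next few entries). That is expected behaviour: client-go picks the deadline at a random point in roughly the 70-90% span of the certificate's lifetime, so each recomputation jitters, and a deadline already in the past simply means rotation is due immediately. A sketch of that calculation (the 0.7 + 0.2*rand jitter matches upstream client-go's nextRotationDeadline as I understand it; the notBefore date below is an assumed one-year-earlier issue time, not taken from the log):

    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before, not_after):
        # Random point in the 70%-90% window of the certificate lifetime,
        # so a fleet of kubelets does not all rotate simultaneously.
        lifetime = (not_after - not_before).total_seconds()
        return not_before + timedelta(seconds=lifetime * (0.7 + 0.2 * random.random()))

    # Assumed issue date (one year before the logged expiry):
    not_before = datetime(2025, 2, 24, 5, 53, 3)
    not_after = datetime(2026, 2, 24, 5, 53, 3)
    print(rotation_deadline(not_before, not_after))  # lands between ~2025-11-06 and ~2026-01-18
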
Feb 23 10:08:15 crc kubenswrapper[4904]: I0223 10:08:15.255510 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:15 crc kubenswrapper[4904]: I0223 10:08:15.255605 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:15 crc kubenswrapper[4904]: E0223 10:08:15.255826 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:15 crc kubenswrapper[4904]: I0223 10:08:15.255852 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:15 crc kubenswrapper[4904]: I0223 10:08:15.255860 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:15 crc kubenswrapper[4904]: E0223 10:08:15.255942 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:15 crc kubenswrapper[4904]: E0223 10:08:15.256003 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:15 crc kubenswrapper[4904]: E0223 10:08:15.256132 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:15 crc kubenswrapper[4904]: I0223 10:08:15.395645 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:23:43.829946783 +0000 UTC Feb 23 10:08:15 crc kubenswrapper[4904]: I0223 10:08:15.788781 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:15 crc kubenswrapper[4904]: E0223 10:08:15.788935 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:15 crc kubenswrapper[4904]: E0223 10:08:15.788990 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs podName:3ad99ed9-56d8-464c-94ce-e861240dd0a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:08:31.788973126 +0000 UTC m=+145.209346639 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs") pod "network-metrics-daemon-rmw4r" (UID: "3ad99ed9-56d8-464c-94ce-e861240dd0a5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:16 crc kubenswrapper[4904]: I0223 10:08:16.396496 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:17:10.240197001 +0000 UTC Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.254395 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.254458 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.254518 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:17 crc kubenswrapper[4904]: E0223 10:08:17.254617 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.254638 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:17 crc kubenswrapper[4904]: E0223 10:08:17.254782 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:17 crc kubenswrapper[4904]: E0223 10:08:17.254887 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:17 crc kubenswrapper[4904]: E0223 10:08:17.255004 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.269909 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.282328 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:4
6Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.296847 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"p
odIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.316538 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\
"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.330411 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.347632 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: E0223 10:08:17.355662 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
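Every status patch above fails the same way: the kubelet's PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-23. A minimal diagnostic sketch in Go (assuming shell access on the node; the loopback address and port are taken from the log lines, nothing else is) that dials the webhook endpoint and prints the certificate's validity window to confirm the expiry:

```go
// certcheck.go — inspect the TLS certificate served by the webhook endpoint
// that the kubelet entries above fail to reach (127.0.0.1:9743, from the log).
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify is deliberate: the certificate no longer verifies,
	// and the point here is to read it, not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is EXPIRED — matches the x509 errors in the log")
	}
}
```

Expected against this log, notAfter would print 2025-08-24T17:21:41Z. Note the contrast with the kubelet-serving entry further down, where certificate_manager.go reports a healthy rotation schedule: client-side rotation is working, while the webhook's serving certificate is the one stuck in the past.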
Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.362657 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.376620 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.389760 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.397150 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:53:31.745704564 +0000 UTC Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.402567 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.417374 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.434495 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.455461 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.468932 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.495656 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f73991
40efa79233d2429bd04f63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:08:12Z\\\",\\\"message\\\":\\\"bm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d6ce7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: catalog-operator,},ClusterIP:10.217.5.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0223 10:08:12.160211 7034 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:08:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.506305 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:17 crc kubenswrapper[4904]: I0223 10:08:17.514864 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:17Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:18 crc kubenswrapper[4904]: I0223 10:08:18.397500 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:52:15.329567769 +0000 UTC Feb 23 10:08:19 crc kubenswrapper[4904]: I0223 10:08:19.254640 4904 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:19 crc kubenswrapper[4904]: I0223 10:08:19.254691 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:19 crc kubenswrapper[4904]: E0223 10:08:19.254861 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:19 crc kubenswrapper[4904]: I0223 10:08:19.254880 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:19 crc kubenswrapper[4904]: I0223 10:08:19.254652 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:19 crc kubenswrapper[4904]: E0223 10:08:19.254967 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:19 crc kubenswrapper[4904]: E0223 10:08:19.255034 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:19 crc kubenswrapper[4904]: E0223 10:08:19.255080 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:19 crc kubenswrapper[4904]: I0223 10:08:19.397908 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:54:24.08349555 +0000 UTC Feb 23 10:08:20 crc kubenswrapper[4904]: I0223 10:08:20.398332 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:36:12.573320527 +0000 UTC Feb 23 10:08:21 crc kubenswrapper[4904]: I0223 10:08:21.254807 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:21 crc kubenswrapper[4904]: I0223 10:08:21.254823 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:21 crc kubenswrapper[4904]: I0223 10:08:21.254826 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:21 crc kubenswrapper[4904]: E0223 10:08:21.255031 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:21 crc kubenswrapper[4904]: I0223 10:08:21.255202 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:21 crc kubenswrapper[4904]: E0223 10:08:21.255196 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:21 crc kubenswrapper[4904]: E0223 10:08:21.255319 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:21 crc kubenswrapper[4904]: E0223 10:08:21.255440 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:21 crc kubenswrapper[4904]: I0223 10:08:21.398954 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:40:48.236194017 +0000 UTC Feb 23 10:08:22 crc kubenswrapper[4904]: E0223 10:08:22.357327 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 10:08:22 crc kubenswrapper[4904]: I0223 10:08:22.400076 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:41:43.146858184 +0000 UTC Feb 23 10:08:23 crc kubenswrapper[4904]: I0223 10:08:23.254920 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:23 crc kubenswrapper[4904]: I0223 10:08:23.255031 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:23 crc kubenswrapper[4904]: E0223 10:08:23.255104 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:23 crc kubenswrapper[4904]: E0223 10:08:23.255206 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:23 crc kubenswrapper[4904]: I0223 10:08:23.255327 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:23 crc kubenswrapper[4904]: E0223 10:08:23.255623 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:23 crc kubenswrapper[4904]: I0223 10:08:23.255661 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:23 crc kubenswrapper[4904]: E0223 10:08:23.255864 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
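The run of "Error syncing pod" entries above all reduce to one condition: kubelet finds no network configuration file in /etc/kubernetes/cni/net.d/, so it keeps reporting NetworkReady=false with reason NetworkPluginNotReady for every pod that needs a sandbox. What follows is a minimal illustrative sketch of that kind of readiness probe, not kubelet's actual implementation: the directory path is taken verbatim from the log, while the function name hasCNIConfig and the exact set of accepted file extensions are assumptions for illustration.

    // cnicheck.go: hedged sketch of a CNI-config readiness probe matching
    // the "no CNI configuration file in /etc/kubernetes/cni/net.d/" errors
    // logged above. Path from the log; names/extensions are illustrative.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether dir contains at least one CNI network
    // configuration file; roughly the condition the runtime needs before
    // the node can report NetworkReady=true.
    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // common CNI config suffixes (assumed set)
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("cannot read CNI config dir:", err)
            return
        }
        if !ok {
            // Mirrors the NetworkPluginNotReady condition seen above.
            fmt.Println("NetworkReady=false: no CNI configuration file found")
        }
    }

On this node the missing config is consistent with the earlier entries showing ovnkube-controller in CrashLoopBackOff: the network plugin that would normally write the config never comes up, so the condition persists across each retry logged below.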
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:23 crc kubenswrapper[4904]: I0223 10:08:23.400590 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:26:40.005964398 +0000 UTC Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.401086 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:59:17.457047015 +0000 UTC Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.594391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.594444 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.594454 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.594472 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.594486 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:24Z","lastTransitionTime":"2026-02-23T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:24 crc kubenswrapper[4904]: E0223 10:08:24.613763 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\
"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:24Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.618319 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.618373 4904 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.618388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.618407 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.618421 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:24Z","lastTransitionTime":"2026-02-23T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:24 crc kubenswrapper[4904]: E0223 10:08:24.632602 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:24Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.637999 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.638042 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
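Each of the node-status patch attempts above is rejected for the same reason: the Post to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the node clock (2026-02-23) is past the webhook serving certificate's NotAfter (2025-08-24T17:21:41Z). The sketch below reproduces that validity-window check in isolation, under stated assumptions: the file name server.crt is a placeholder, and the check compares NotBefore/NotAfter directly rather than running a full chain verification as Go's crypto/x509 does during the handshake; the rejected-window condition is the same.

    // certwindow.go: hedged sketch of the x509 validity check behind
    // "certificate has expired or is not yet valid" in the log above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        pemBytes, err := os.ReadFile("server.crt") // placeholder path, not the webhook's real cert location
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        now := time.Now()
        // TLS verification rejects the chain whenever now falls outside
        // [NotBefore, NotAfter]; per the log, now is well past NotAfter,
        // so every status patch fails before it reaches the API server.
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        } else {
            fmt.Println("certificate valid until", cert.NotAfter)
        }
    }

Because the webhook intercepts both pod and node status patches, the kubelet retries seen above (10:08:24.613763, .632602, .657838) will keep failing with identical payloads until the webhook's certificate is rotated or the clock skew is resolved.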
event="NodeHasNoDiskPressure" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.638052 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.638065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.638075 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:24Z","lastTransitionTime":"2026-02-23T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:24 crc kubenswrapper[4904]: E0223 10:08:24.657838 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:24Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.662193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.662229 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.662237 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.662252 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.662262 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:24Z","lastTransitionTime":"2026-02-23T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:24 crc kubenswrapper[4904]: E0223 10:08:24.680615 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:24Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.684483 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.684512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.684521 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.684533 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:24 crc kubenswrapper[4904]: I0223 10:08:24.684541 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:24Z","lastTransitionTime":"2026-02-23T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:24 crc kubenswrapper[4904]: E0223 10:08:24.697534 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:24Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:24 crc kubenswrapper[4904]: E0223 10:08:24.697707 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:08:25 crc kubenswrapper[4904]: I0223 10:08:25.254876 4904 util.go:30] "No sandbox for pod can be found. 
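
Every status-update retry above ends in the same terminal cause, until "update node status exceeds retry count": the network-node-identity webhook at https://127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z while the node clock reads 2026-02-23. A minimal sketch for confirming this from the node, assuming Python 3 plus the third-party `cryptography` package (>= 42 for the *_utc accessors); it fetches the webhook's serving certificate without chain verification and prints its validity window:

import ssl
from datetime import datetime, timezone

from cryptography import x509

# Host and port come from the failing URL https://127.0.0.1:9743/node above.
HOST, PORT = "127.0.0.1", 9743

# With no CA bundle given, get_server_certificate() skips verification, so it
# still returns the PEM even though a verifying client fails with
# "x509: certificate has expired or is not yet valid".
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
print("subject:  ", cert.subject.rfc4514_string())
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)  # True reproduces the error above
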
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:25 crc kubenswrapper[4904]: I0223 10:08:25.254945 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:25 crc kubenswrapper[4904]: I0223 10:08:25.254910 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:25 crc kubenswrapper[4904]: I0223 10:08:25.254947 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:25 crc kubenswrapper[4904]: E0223 10:08:25.255062 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:25 crc kubenswrapper[4904]: E0223 10:08:25.255190 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:25 crc kubenswrapper[4904]: E0223 10:08:25.255403 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:25 crc kubenswrapper[4904]: E0223 10:08:25.255441 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
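
All four pod sync failures above bottom out in one condition: no CNI configuration file in /etc/kubernetes/cni/net.d/, i.e. the network plugin has not written its config yet. A small sketch of the underlying check, assuming it runs on the node itself; the directory path is taken from the messages above, while the suffix filter is an assumption about which filenames count as network configs:

import json
from pathlib import Path

# Directory named in the NetworkReady=false errors above.
CNI_DIR = Path("/etc/kubernetes/cni/net.d")

configs = sorted(p for p in CNI_DIR.glob("*")
                 if p.suffix in {".conf", ".conflist", ".json"})
if not configs:
    print(f"no CNI configuration file in {CNI_DIR}/ -- matches the kubelet error")
for p in configs:
    try:
        data = json.loads(p.read_text())
        # A .conflist carries a "plugins" array; a bare .conf has a single "type".
        kinds = data.get("type") or [pl.get("type") for pl in data.get("plugins", [])]
        print(p.name, "->", data.get("name"), kinds)
    except (OSError, json.JSONDecodeError) as exc:
        print(p.name, "unreadable:", exc)
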
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:25 crc kubenswrapper[4904]: I0223 10:08:25.401933 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:30:47.961196607 +0000 UTC Feb 23 10:08:26 crc kubenswrapper[4904]: I0223 10:08:26.096802 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:26 crc kubenswrapper[4904]: I0223 10:08:26.096886 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.097040 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.097085 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:30.097053119 +0000 UTC m=+203.517426632 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.097124 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:09:30.097116811 +0000 UTC m=+203.517490324 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 10:08:26 crc kubenswrapper[4904]: I0223 10:08:26.198237 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:26 crc kubenswrapper[4904]: I0223 10:08:26.198312 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:26 crc kubenswrapper[4904]: I0223 10:08:26.198354 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198473 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198541 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198562 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 10:09:30.198539735 +0000 UTC m=+203.618913338 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198566 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198586 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198487 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198647 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198653 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 10:09:30.198630988 +0000 UTC m=+203.619004531 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198660 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:08:26 crc kubenswrapper[4904]: E0223 10:08:26.198746 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:09:30.1987073 +0000 UTC m=+203.619080813 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 10:08:26 crc kubenswrapper[4904]: I0223 10:08:26.402337 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:22:44.661812669 +0000 UTC Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.254314 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.254356 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.254601 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.254606 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:27 crc kubenswrapper[4904]: E0223 10:08:27.254579 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:27 crc kubenswrapper[4904]: E0223 10:08:27.254746 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:27 crc kubenswrapper[4904]: E0223 10:08:27.255158 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:27 crc kubenswrapper[4904]: E0223 10:08:27.255278 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.255438 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:08:27 crc kubenswrapper[4904]: E0223 10:08:27.255700 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.276004 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c207368474029578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\"
,\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.290387 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.301907 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.311847 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.324991 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.340492 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.354171 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: E0223 10:08:27.357661 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.367331 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.385497 4904 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.403138 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:28:35.076788355 +0000 UTC Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.407532 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:08:12Z\\\",\\\"message\\\":\\\"bm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d6ce7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: catalog-operator,},ClusterIP:10.217.5.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0223 10:08:12.160211 7034 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:08:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.423749 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.435519 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 
10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.446859 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.455583 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.467407 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.480555 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:27 crc kubenswrapper[4904]: I0223 10:08:27.491160 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:27Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:28 crc kubenswrapper[4904]: I0223 10:08:28.403582 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:27:11.568947171 +0000 UTC Feb 23 10:08:29 crc kubenswrapper[4904]: I0223 10:08:29.254834 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:29 crc kubenswrapper[4904]: E0223 10:08:29.255204 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:29 crc kubenswrapper[4904]: I0223 10:08:29.254890 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:29 crc kubenswrapper[4904]: E0223 10:08:29.255262 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:29 crc kubenswrapper[4904]: I0223 10:08:29.254904 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:29 crc kubenswrapper[4904]: E0223 10:08:29.255325 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:29 crc kubenswrapper[4904]: I0223 10:08:29.254852 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:29 crc kubenswrapper[4904]: E0223 10:08:29.255389 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:29 crc kubenswrapper[4904]: I0223 10:08:29.404799 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:10:24.385078013 +0000 UTC Feb 23 10:08:30 crc kubenswrapper[4904]: I0223 10:08:30.405519 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:42:05.961002836 +0000 UTC Feb 23 10:08:31 crc kubenswrapper[4904]: I0223 10:08:31.255856 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:31 crc kubenswrapper[4904]: I0223 10:08:31.255856 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:31 crc kubenswrapper[4904]: I0223 10:08:31.256019 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:31 crc kubenswrapper[4904]: E0223 10:08:31.256208 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:31 crc kubenswrapper[4904]: I0223 10:08:31.256268 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:31 crc kubenswrapper[4904]: E0223 10:08:31.257128 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:31 crc kubenswrapper[4904]: E0223 10:08:31.257553 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:31 crc kubenswrapper[4904]: E0223 10:08:31.257697 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:31 crc kubenswrapper[4904]: I0223 10:08:31.268481 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 10:08:31 crc kubenswrapper[4904]: I0223 10:08:31.406502 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:36:01.408806074 +0000 UTC Feb 23 10:08:31 crc kubenswrapper[4904]: I0223 10:08:31.856041 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:31 crc kubenswrapper[4904]: E0223 10:08:31.856236 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:31 crc kubenswrapper[4904]: E0223 10:08:31.856312 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs podName:3ad99ed9-56d8-464c-94ce-e861240dd0a5 nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.856291012 +0000 UTC m=+177.276664535 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs") pod "network-metrics-daemon-rmw4r" (UID: "3ad99ed9-56d8-464c-94ce-e861240dd0a5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 10:08:32 crc kubenswrapper[4904]: E0223 10:08:32.359804 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 10:08:32 crc kubenswrapper[4904]: I0223 10:08:32.407573 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:23:43.240018121 +0000 UTC Feb 23 10:08:33 crc kubenswrapper[4904]: I0223 10:08:33.254293 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:33 crc kubenswrapper[4904]: I0223 10:08:33.254391 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:33 crc kubenswrapper[4904]: I0223 10:08:33.254315 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:33 crc kubenswrapper[4904]: I0223 10:08:33.254478 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:33 crc kubenswrapper[4904]: E0223 10:08:33.254488 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:33 crc kubenswrapper[4904]: E0223 10:08:33.254572 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:33 crc kubenswrapper[4904]: E0223 10:08:33.254664 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:33 crc kubenswrapper[4904]: E0223 10:08:33.254815 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:33 crc kubenswrapper[4904]: I0223 10:08:33.407942 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:36:48.367931291 +0000 UTC Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.408769 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:07:57.83766044 +0000 UTC Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.676893 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fm2n2_65ad73a3-cf4b-49ec-b994-2d52cb43bc76/kube-multus/0.log" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.677220 4904 generic.go:334] "Generic (PLEG): container finished" podID="65ad73a3-cf4b-49ec-b994-2d52cb43bc76" containerID="7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42" exitCode=1 Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.677275 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fm2n2" event={"ID":"65ad73a3-cf4b-49ec-b994-2d52cb43bc76","Type":"ContainerDied","Data":"7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42"} Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.677909 4904 scope.go:117] "RemoveContainer" containerID="7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.690743 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a41d5b-efe1-4d5f-a483-72d73a51c514\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df16a8fb9b11c51430565d4edccdc4b5fa0084cc56dceda66e205d5a52191ab1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73250754e428fccf72a9ad7ccba12d7c6f9fd91c3813dcf331f72fa4b9268c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eed08557ce51ab7894bcee6fd84e7502ec700f821c9ef928da8bc6c72277d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d3e253fe7eea90aaf8e37024c7a4a43b644f3f67f008f90345ca7f2ac31494\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d3e253fe7eea90aaf8e37024c7a4a43b644f3f67f008f90345ca7f2ac31494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.701434 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e66580d6-ac94-4391-ae25-c2a99a3cf3bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1997a298c55f1ada0efdb4c50de3589c72a9287cd783a32c87b3b9899a835863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e8ef66f990d4cfc70a071d30a4b9e3394ae7eb8a2f371d3a05ff3b6803301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j68r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cnx2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.712246 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.719654 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b25k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d93217b-ff9a-4324-8f62-8a0c0034367a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11bb65dadaa304f9cb4ccc33474c79ae5589c9af4eab403e995900c582d3b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnzg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b25k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.729220 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cb76d8-4bf9-49e5-b51a-c55794ba0cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bef3be752976ffb1521721c1f90e4454360a7f2959f6cfcb07a27821b148249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9kpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h4l4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.741274 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fm2n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65ad73a3-cf4b-49ec-b994-2d52cb43bc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:08:33Z\\\",\\\"message\\\":\\\"2026-02-23T10:07:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0a13d927-21a3-4ce4-8341-673e93a088d8\\\\n2026-02-23T10:07:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0a13d927-21a3-4ce4-8341-673e93a088d8 to /host/opt/cni/bin/\\\\n2026-02-23T10:07:48Z [verbose] multus-daemon started\\\\n2026-02-23T10:07:48Z [verbose] Readiness Indicator file check\\\\n2026-02-23T10:08:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4gcn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fm2n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.750775 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xrpzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46ee48db-2cbe-4dbf-a8dc-f6657da8cd7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4aa737d0b692c2a8cd62624d4e51f6105499e49a967a4c54a7ee2226a480e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g98lh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xrpzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.766168 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f16e2e7-7479-4c7d-9582-00896887abcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c51a5e1dbcc617624918a80e94145376683414f7e43429e95e657b69b9ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9436569445e5844e0ad496e8e948d70566baa384bde93677e392ed7b26f9d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2403581aa14e9a86b4079b71f8bdf2d275bf62616c6212433508ed02a2affcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14ce3c7497bb27954e0cc9d2477a30247f2d466db4c49ba6a8a8731a7e57398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03abbd822fe1a13844f9322167c004fa1f713d8791b4bd18ce080b5c6594100e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db070e68d95db326c23e8679b5931cb6d4b1a03dd13257f9fb15217001ffa5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45119d0772cf43e418967b62b77485200e4d9a3d836c3b04b8b24471a04e8cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mv4jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.787158 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e241faa-67b7-4393-be18-55b315f50b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b9b58dfb3e85a317c7fe68b219d71e5ccf8a1b6254fe5c80171856890f8a4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229196821a0426a12093fdbdf8a52a1f0b68c96ed81e795c11eb36e28ef52ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd1c595a74b8ce87bda5ff96e303c20d4622b38565b0c26f5fb4184eaeffca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cae251c1bcf6cd4a48cb5f71c20736847402
9578d4dd10364ac1bf7f65dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1a8c084feec839a0449be7ac1cc2f1d109e8b0a0917ce3fb306ef6aabb5833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e817ff29c75f8249bb20bd4137921f51d5ed4250aa5fac5907c3f4e00937b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67963508a3b54b9a0d0515b203cefba5a6c3e8b95ca17acdb3e67880704490eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777ff92e0444b9f7220692c4cb251d431fb00501b322fcd5f1f0622dcdc07e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.803215 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02cfb4a5-eb71-48e2-b719-1dd674114a42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f56813fbbeb7dc906a1216e69e83ec
3c17aeae85fc6b3885f0a052284692da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T10:07:01Z\\\",\\\"message\\\":\\\"W0223 10:07:00.561254 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 10:07:00.562047 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771841220 cert, and key in /tmp/serving-cert-4023114581/serving-signer.crt, /tmp/serving-cert-4023114581/serving-signer.key\\\\nI0223 10:07:00.925497 1 observer_polling.go:159] Starting file observer\\\\nW0223 10:07:00.928065 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:07:00Z is after 2026-02-23T05:33:16Z\\\\nI0223 10:07:00.928251 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 10:07:00.929078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023114581/tls.crt::/tmp/serving-cert-4023114581/tls.key\\\\\\\"\\\\nF0223 10:07:01.120658 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.816925 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5115f09c66245c2c09d48bc6a68f81fdc70991e4cea879a9f7796a7d8981a06f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.826553 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9189fa7e5badae537ed1adab5033b3a14beeeb9fbad4a5cb1074c3e38366db6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.837091 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef5bacc8a27803966919ef72f222dcc34da5d6974acd37895c311ec8cce232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a1b790e34e8ab9c3f8e93c7f3d57c5e34c19024c9a6590bbc2d16a32ab9d835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.847344 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.857857 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95c2e4ea-6d69-4463-93c8-a56ca2884b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ffd75179d7d79c4dbfaa251051082a180daf0c364a8009161f356c738ebba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce224d3ae56e058b9f0fb48815870d7fe8bb54cee075f988293b3a2f8ae609\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:06:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.869080 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.878035 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.878312 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.878446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.878564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.878835 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:34Z","lastTransitionTime":"2026-02-23T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.889204 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0acf61bd-42c5-4566-ac29-815afead2012\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T10:08:12Z\\\",\\\"message\\\":\\\"bm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0072d6ce7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: catalog-operator,},ClusterIP:10.217.5.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0223 10:08:12.160211 7034 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T10:08:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9h7jb_openshift-ovn-kubernetes(0acf61bd-42c5-4566-ac29-815afead2012)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T10:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T10:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T10:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9h7jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: E0223 10:08:34.891915 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.894971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.895098 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.895215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.895348 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.895467 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:34Z","lastTransitionTime":"2026-02-23T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.902218 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ad99ed9-56d8-464c-94ce-e861240dd0a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T10:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn9mm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T10:07:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rmw4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: E0223 10:08:34.905592 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.908986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.909149 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.909279 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.909394 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.909508 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:34Z","lastTransitionTime":"2026-02-23T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:34 crc kubenswrapper[4904]: E0223 10:08:34.921114 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.924518 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.924579 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.924599 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.924624 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.924667 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:34Z","lastTransitionTime":"2026-02-23T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:34 crc kubenswrapper[4904]: E0223 10:08:34.934863 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.938169 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.938215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.938226 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.938243 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:34 crc kubenswrapper[4904]: I0223 10:08:34.938253 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:34Z","lastTransitionTime":"2026-02-23T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:34 crc kubenswrapper[4904]: E0223 10:08:34.949556 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T10:08:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2177c7d2-fddd-4945-9ead-9ca47cb98812\\\",\\\"systemUUID\\\":\\\"a448a05d-7f8e-4a16-bfb9-e12591dd55db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T10:08:34Z is after 2025-08-24T17:21:41Z" Feb 23 10:08:34 crc kubenswrapper[4904]: E0223 10:08:34.949695 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.254530 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:35 crc kubenswrapper[4904]: E0223 10:08:35.254985 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.254571 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:35 crc kubenswrapper[4904]: E0223 10:08:35.255208 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.254544 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:35 crc kubenswrapper[4904]: E0223 10:08:35.255405 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.255542 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:35 crc kubenswrapper[4904]: E0223 10:08:35.255698 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.409706 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:41:19.283363783 +0000 UTC Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.681109 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fm2n2_65ad73a3-cf4b-49ec-b994-2d52cb43bc76/kube-multus/0.log" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.681160 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fm2n2" event={"ID":"65ad73a3-cf4b-49ec-b994-2d52cb43bc76","Type":"ContainerStarted","Data":"152bdc6379dd6bc50cffe55466797d4f53ac52eeb68ca86ce1e5e4b6ef052b83"} Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.720698 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mv4jm" podStartSLOduration=80.720680007 podStartE2EDuration="1m20.720680007s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.720395679 +0000 UTC m=+149.140769202" watchObservedRunningTime="2026-02-23 10:08:35.720680007 +0000 UTC m=+149.141053520" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.745041 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=47.745021821 podStartE2EDuration="47.745021821s" podCreationTimestamp="2026-02-23 10:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.744567758 +0000 UTC m=+149.164941281" watchObservedRunningTime="2026-02-23 10:08:35.745021821 +0000 UTC m=+149.165395334" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.774734 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=65.774689391 podStartE2EDuration="1m5.774689391s" podCreationTimestamp="2026-02-23 10:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.760131854 +0000 UTC m=+149.180505367" watchObservedRunningTime="2026-02-23 10:08:35.774689391 +0000 UTC m=+149.195062904" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.812639 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=60.812621913 podStartE2EDuration="1m0.812621913s" podCreationTimestamp="2026-02-23 10:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.812120768 +0000 UTC m=+149.232494281" watchObservedRunningTime="2026-02-23 10:08:35.812621913 +0000 UTC m=+149.232995436" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.868240 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=4.868226303 podStartE2EDuration="4.868226303s" podCreationTimestamp="2026-02-23 10:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.867857633 +0000 UTC m=+149.288231146" watchObservedRunningTime="2026-02-23 10:08:35.868226303 +0000 UTC m=+149.288599816" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.878944 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cnx2n" podStartSLOduration=79.878927657 podStartE2EDuration="1m19.878927657s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.878357991 +0000 UTC m=+149.298731504" watchObservedRunningTime="2026-02-23 10:08:35.878927657 +0000 UTC m=+149.299301170" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.909588 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-b25k6" podStartSLOduration=80.909571646 podStartE2EDuration="1m20.909571646s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.897464341 +0000 UTC m=+149.317837854" watchObservedRunningTime="2026-02-23 10:08:35.909571646 +0000 UTC m=+149.329945159" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.909740 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podStartSLOduration=80.909689939 podStartE2EDuration="1m20.909689939s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.909125133 +0000 UTC m=+149.329498656" watchObservedRunningTime="2026-02-23 10:08:35.909689939 +0000 UTC m=+149.330063452" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.921281 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fm2n2" podStartSLOduration=80.921265799 podStartE2EDuration="1m20.921265799s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.920901788 +0000 UTC m=+149.341275301" watchObservedRunningTime="2026-02-23 10:08:35.921265799 +0000 UTC m=+149.341639312" Feb 23 10:08:35 crc kubenswrapper[4904]: I0223 10:08:35.929733 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xrpzc" podStartSLOduration=80.929696236 podStartE2EDuration="1m20.929696236s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:35.928991715 +0000 UTC m=+149.349365228" watchObservedRunningTime="2026-02-23 10:08:35.929696236 +0000 UTC m=+149.350069749" Feb 23 10:08:36 crc kubenswrapper[4904]: I0223 10:08:36.411532 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:13:39.047285035 +0000 UTC Feb 23 10:08:37 crc kubenswrapper[4904]: I0223 10:08:37.255022 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:37 crc kubenswrapper[4904]: I0223 10:08:37.255038 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:37 crc kubenswrapper[4904]: I0223 10:08:37.255100 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:37 crc kubenswrapper[4904]: I0223 10:08:37.255220 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:37 crc kubenswrapper[4904]: E0223 10:08:37.256669 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:37 crc kubenswrapper[4904]: E0223 10:08:37.256704 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:37 crc kubenswrapper[4904]: E0223 10:08:37.257107 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:37 crc kubenswrapper[4904]: E0223 10:08:37.257304 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:37 crc kubenswrapper[4904]: E0223 10:08:37.360300 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 10:08:37 crc kubenswrapper[4904]: I0223 10:08:37.412305 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:11:56.8173662 +0000 UTC Feb 23 10:08:38 crc kubenswrapper[4904]: I0223 10:08:38.413323 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:50:00.200809622 +0000 UTC Feb 23 10:08:39 crc kubenswrapper[4904]: I0223 10:08:39.255140 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:39 crc kubenswrapper[4904]: E0223 10:08:39.255258 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:39 crc kubenswrapper[4904]: I0223 10:08:39.255368 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:39 crc kubenswrapper[4904]: I0223 10:08:39.255415 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:39 crc kubenswrapper[4904]: I0223 10:08:39.255441 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:39 crc kubenswrapper[4904]: E0223 10:08:39.255484 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:39 crc kubenswrapper[4904]: E0223 10:08:39.255587 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:39 crc kubenswrapper[4904]: E0223 10:08:39.255690 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:39 crc kubenswrapper[4904]: I0223 10:08:39.413495 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:11:42.989901305 +0000 UTC Feb 23 10:08:40 crc kubenswrapper[4904]: I0223 10:08:40.414740 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:51:01.548830998 +0000 UTC Feb 23 10:08:41 crc kubenswrapper[4904]: I0223 10:08:41.254366 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:41 crc kubenswrapper[4904]: I0223 10:08:41.254456 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:41 crc kubenswrapper[4904]: I0223 10:08:41.254560 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:41 crc kubenswrapper[4904]: E0223 10:08:41.254577 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:41 crc kubenswrapper[4904]: I0223 10:08:41.254605 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:41 crc kubenswrapper[4904]: E0223 10:08:41.254761 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:41 crc kubenswrapper[4904]: E0223 10:08:41.254940 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:41 crc kubenswrapper[4904]: E0223 10:08:41.255086 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:41 crc kubenswrapper[4904]: I0223 10:08:41.416043 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:38:59.935722037 +0000 UTC Feb 23 10:08:42 crc kubenswrapper[4904]: I0223 10:08:42.597817 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:48:20.39659108 +0000 UTC Feb 23 10:08:42 crc kubenswrapper[4904]: I0223 10:08:42.598618 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:42 crc kubenswrapper[4904]: I0223 10:08:42.598601 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:42 crc kubenswrapper[4904]: E0223 10:08:42.598707 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:42 crc kubenswrapper[4904]: E0223 10:08:42.598848 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:42 crc kubenswrapper[4904]: I0223 10:08:42.598873 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:42 crc kubenswrapper[4904]: E0223 10:08:42.598924 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 10:08:42 crc kubenswrapper[4904]: E0223 10:08:42.599084 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:42 crc kubenswrapper[4904]: I0223 10:08:42.599704 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:08:42 crc kubenswrapper[4904]: I0223 10:08:42.703438 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/2.log" Feb 23 10:08:42 crc kubenswrapper[4904]: I0223 10:08:42.705955 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerStarted","Data":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} Feb 23 10:08:42 crc kubenswrapper[4904]: I0223 10:08:42.706361 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:08:43 crc kubenswrapper[4904]: I0223 10:08:43.254743 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:43 crc kubenswrapper[4904]: E0223 10:08:43.254881 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:43 crc kubenswrapper[4904]: I0223 10:08:43.318313 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podStartSLOduration=87.318294072 podStartE2EDuration="1m27.318294072s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:42.730650462 +0000 UTC m=+156.151023975" watchObservedRunningTime="2026-02-23 10:08:43.318294072 +0000 UTC m=+156.738667575" Feb 23 10:08:43 crc kubenswrapper[4904]: I0223 10:08:43.318607 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rmw4r"] Feb 23 10:08:43 crc kubenswrapper[4904]: I0223 10:08:43.318694 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:43 crc kubenswrapper[4904]: E0223 10:08:43.318821 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:43 crc kubenswrapper[4904]: I0223 10:08:43.598213 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:17:13.490870271 +0000 UTC Feb 23 10:08:44 crc kubenswrapper[4904]: I0223 10:08:44.254321 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:44 crc kubenswrapper[4904]: I0223 10:08:44.254411 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:44 crc kubenswrapper[4904]: E0223 10:08:44.254450 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:44 crc kubenswrapper[4904]: E0223 10:08:44.254484 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:44 crc kubenswrapper[4904]: I0223 10:08:44.598672 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:15:50.987121535 +0000 UTC Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.017445 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.017485 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.017494 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.017507 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.017515 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T10:08:45Z","lastTransitionTime":"2026-02-23T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.062054 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6"] Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.062551 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.067041 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.067390 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.067698 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.067885 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.118682 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.118834 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.118946 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.118980 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.119030 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.220517 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc 
kubenswrapper[4904]: I0223 10:08:45.220572 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.220618 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.220709 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.220760 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.220797 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.220885 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.221867 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.228739 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.255250 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.255297 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:45 crc kubenswrapper[4904]: E0223 10:08:45.255383 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:45 crc kubenswrapper[4904]: E0223 10:08:45.255599 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.278705 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93f29096-0caa-40f8-8dfd-6cc5eb26ce7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4m9h6\" (UID: \"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.377672 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.600078 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:31:19.66884694 +0000 UTC Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.600139 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.609384 4904 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.718375 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" event={"ID":"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a","Type":"ContainerStarted","Data":"4994eaa0178e2c10130c1d7527c36208ad84a45d6e298eb37d1804f60c6ca770"} Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.718419 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" event={"ID":"93f29096-0caa-40f8-8dfd-6cc5eb26ce7a","Type":"ContainerStarted","Data":"1eca4a9b927ea2f48fcb827d7c2401293839d551f3c762432a4af6cf6acbe751"} Feb 23 10:08:45 crc kubenswrapper[4904]: I0223 10:08:45.737552 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4m9h6" podStartSLOduration=90.737303879 podStartE2EDuration="1m30.737303879s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:45.735380612 +0000 UTC m=+159.155754185" watchObservedRunningTime="2026-02-23 10:08:45.737303879 +0000 UTC m=+159.157677432" Feb 23 10:08:46 crc kubenswrapper[4904]: I0223 10:08:46.254896 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:46 crc kubenswrapper[4904]: I0223 10:08:46.254946 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:46 crc kubenswrapper[4904]: E0223 10:08:46.255041 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 10:08:46 crc kubenswrapper[4904]: E0223 10:08:46.255222 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 10:08:47 crc kubenswrapper[4904]: I0223 10:08:47.254903 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:47 crc kubenswrapper[4904]: I0223 10:08:47.254986 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:47 crc kubenswrapper[4904]: E0223 10:08:47.255500 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rmw4r" podUID="3ad99ed9-56d8-464c-94ce-e861240dd0a5" Feb 23 10:08:47 crc kubenswrapper[4904]: E0223 10:08:47.255630 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 10:08:47 crc kubenswrapper[4904]: I0223 10:08:47.759001 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:08:48 crc kubenswrapper[4904]: I0223 10:08:48.254690 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:08:48 crc kubenswrapper[4904]: I0223 10:08:48.254867 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:08:48 crc kubenswrapper[4904]: I0223 10:08:48.256612 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 10:08:48 crc kubenswrapper[4904]: I0223 10:08:48.257100 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 10:08:49 crc kubenswrapper[4904]: I0223 10:08:49.254617 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:08:49 crc kubenswrapper[4904]: I0223 10:08:49.254689 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:08:49 crc kubenswrapper[4904]: I0223 10:08:49.257003 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 10:08:49 crc kubenswrapper[4904]: I0223 10:08:49.257021 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 10:08:49 crc kubenswrapper[4904]: I0223 10:08:49.257051 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 10:08:49 crc kubenswrapper[4904]: I0223 10:08:49.257368 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 10:08:50 crc kubenswrapper[4904]: I0223 10:08:50.275793 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.041531 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.089992 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-scdtm"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.090724 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.091261 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.091945 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.092312 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bclzx"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.092882 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.094638 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.095176 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.096212 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.096904 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.097565 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.098194 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.100775 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.102233 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.105752 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.112105 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.117419 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.118349 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mz6cv"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.119164 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.125050 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.125388 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.125416 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.126075 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.126257 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.126399 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.125271 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.126684 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.126534 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.126586 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.126633 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.129220 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.129380 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130258 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-serving-cert\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130301 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqgr\" (UniqueName: \"kubernetes.io/projected/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-kube-api-access-pwqgr\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130339 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-service-ca-bundle\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130370 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec24148-b01d-44f3-9be5-7e98da1d93d3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130407 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f76de875-328c-4a57-beac-43e3b38ed141-audit-dir\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130437 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-config\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130464 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-config\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130492 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-encryption-config\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130549 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130581 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bxxg\" (UniqueName: \"kubernetes.io/projected/3ec24148-b01d-44f3-9be5-7e98da1d93d3-kube-api-access-7bxxg\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130615 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtvv\" (UniqueName: \"kubernetes.io/projected/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-kube-api-access-djtvv\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130646 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf68a5b-b08c-48a8-bfca-214b04069365-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130677 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f76de875-328c-4a57-beac-43e3b38ed141-node-pullsecrets\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130709 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec24148-b01d-44f3-9be5-7e98da1d93d3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130765 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-etcd-serving-ca\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130794 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-serving-cert\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130823 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-config\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130858 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf68a5b-b08c-48a8-bfca-214b04069365-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130885 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-trusted-ca-bundle\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130914 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-image-import-ca\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130958 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kg8\" (UniqueName: \"kubernetes.io/projected/eaf68a5b-b08c-48a8-bfca-214b04069365-kube-api-access-g5kg8\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.130987 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-serving-cert\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.131043 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-audit\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.131069 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-client-ca\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.131099 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zntn\" (UniqueName: \"kubernetes.io/projected/f76de875-328c-4a57-beac-43e3b38ed141-kube-api-access-8zntn\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.131155 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-etcd-client\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.133986 4904 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134233 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134398 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134475 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134559 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134606 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134635 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134782 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134836 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.134979 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.135074 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.135163 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.135285 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.135518 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.135630 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.135763 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.136454 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ls9fb"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.136746 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.137084 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.140891 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.140962 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.137327 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.137383 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.137439 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.137681 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.137849 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.138607 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.138635 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.142571 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.138703 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.137213 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.141287 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.143378 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.144092 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.145534 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m84cg"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.145846 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.146157 4904 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.146201 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.146275 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.147189 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.147031 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gj9mg"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.148066 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.149728 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.150545 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p5d7h"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.151235 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.152192 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xnfq7"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.152564 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.152945 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.153214 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.153425 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.158115 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jnvdm"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.158772 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bm6q7"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.158968 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jnvdm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.161054 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.161230 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.161527 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.161659 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.166025 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.166226 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.166352 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.166591 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.166888 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.166965 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkgqc"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.167021 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.167199 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.167447 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.167667 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.168227 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.168263 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.168738 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.169184 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.169301 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.172651 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.172892 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.173014 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.173213 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.173292 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.173841 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.173994 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.174296 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.174490 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.174627 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.174778 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.175300 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.179337 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.185059 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.188515 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.188748 4904 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.188751 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.188934 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.189016 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.189154 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.189275 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.189627 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.189770 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.189882 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.189983 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.190086 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.190197 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.190289 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.190378 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.190481 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.191698 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.192303 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=5.192278573 podStartE2EDuration="5.192278573s" podCreationTimestamp="2026-02-23 10:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:55.18945145 +0000 UTC m=+168.609824963" watchObservedRunningTime="2026-02-23 10:08:55.192278573 +0000 UTC m=+168.612652086" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.192594 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz"] Feb 23 
10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.193474 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.193637 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.193746 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.195074 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.198178 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.199886 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.203784 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.204472 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.209254 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kg9d6"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.209850 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.209955 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.209866 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.210762 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.217807 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x6bcw"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.218258 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.218495 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb2c7"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.218981 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.219175 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.219207 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.219497 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.220948 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.221523 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.226292 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bclzx"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.227802 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.228595 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.229085 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.229089 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.229764 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.230328 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232340 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-serving-cert\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232365 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqgr\" (UniqueName: \"kubernetes.io/projected/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-kube-api-access-pwqgr\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232385 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-service-ca-bundle\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232402 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec24148-b01d-44f3-9be5-7e98da1d93d3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232421 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a168c48-91ab-4b62-ab5d-0599a0f7427a-config\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232441 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f76de875-328c-4a57-beac-43e3b38ed141-audit-dir\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232457 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-config\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232474 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-config\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: 
I0223 10:08:55.232491 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92ddm\" (UniqueName: \"kubernetes.io/projected/080d9e4a-a9f2-4e85-b386-b0f1aefe6c14-kube-api-access-92ddm\") pod \"cluster-samples-operator-665b6dd947-hkrln\" (UID: \"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232521 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-encryption-config\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232545 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a168c48-91ab-4b62-ab5d-0599a0f7427a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232560 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/59e2a4d4-b37b-46f1-9937-045403839c98-machine-approver-tls\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232592 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232606 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-config\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232621 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bxxg\" (UniqueName: \"kubernetes.io/projected/3ec24148-b01d-44f3-9be5-7e98da1d93d3-kube-api-access-7bxxg\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" 
Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232637 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djtvv\" (UniqueName: \"kubernetes.io/projected/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-kube-api-access-djtvv\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232652 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf68a5b-b08c-48a8-bfca-214b04069365-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232668 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f76de875-328c-4a57-beac-43e3b38ed141-node-pullsecrets\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232682 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec24148-b01d-44f3-9be5-7e98da1d93d3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232698 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr77j\" (UniqueName: \"kubernetes.io/projected/945901ad-f721-4897-bca6-16436563e92c-kube-api-access-hr77j\") pod \"downloads-7954f5f757-jnvdm\" (UID: \"945901ad-f721-4897-bca6-16436563e92c\") " pod="openshift-console/downloads-7954f5f757-jnvdm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232731 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-etcd-serving-ca\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232746 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-serving-cert\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232762 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-config\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232781 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eaf68a5b-b08c-48a8-bfca-214b04069365-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232803 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/59e2a4d4-b37b-46f1-9937-045403839c98-auth-proxy-config\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232827 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-trusted-ca-bundle\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232849 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232872 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-image-import-ca\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232897 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/080d9e4a-a9f2-4e85-b386-b0f1aefe6c14-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hkrln\" (UID: \"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5kg8\" (UniqueName: \"kubernetes.io/projected/eaf68a5b-b08c-48a8-bfca-214b04069365-kube-api-access-g5kg8\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232936 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-serving-cert\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232970 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-audit\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.232987 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-client-ca\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.233004 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zntn\" (UniqueName: \"kubernetes.io/projected/f76de875-328c-4a57-beac-43e3b38ed141-kube-api-access-8zntn\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.233020 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-etcd-client\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.233035 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e2a4d4-b37b-46f1-9937-045403839c98-config\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.233051 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9v4\" (UniqueName: \"kubernetes.io/projected/59e2a4d4-b37b-46f1-9937-045403839c98-kube-api-access-ks9v4\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.233070 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a168c48-91ab-4b62-ab5d-0599a0f7427a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.234536 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-etcd-serving-ca\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.234758 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec24148-b01d-44f3-9be5-7e98da1d93d3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.235443 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-service-ca-bundle\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.235945 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f76de875-328c-4a57-beac-43e3b38ed141-audit-dir\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.236047 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-config\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.236089 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-trusted-ca-bundle\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.236590 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-config\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.236644 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf68a5b-b08c-48a8-bfca-214b04069365-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.236985 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f76de875-328c-4a57-beac-43e3b38ed141-node-pullsecrets\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.237160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.237634 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-config\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.239788 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-client-ca\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.240474 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-audit\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.233053 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.253320 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec24148-b01d-44f3-9be5-7e98da1d93d3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.253674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-serving-cert\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.253983 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf68a5b-b08c-48a8-bfca-214b04069365-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.254888 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-serving-cert\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.255360 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-etcd-client\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.255703 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f76de875-328c-4a57-beac-43e3b38ed141-image-import-ca\") pod \"apiserver-76f77b778f-scdtm\" (UID: 
\"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.260632 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-serving-cert\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.260890 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f76de875-328c-4a57-beac-43e3b38ed141-encryption-config\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.261092 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.261850 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.262534 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.262571 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.265542 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.273807 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.282281 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bfn9"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.282956 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.283573 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.283838 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.284008 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.284143 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.284419 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.284981 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.284989 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.285045 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.285133 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.285383 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.288641 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8v5w5"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.289192 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.290924 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-scdtm"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.292232 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.294806 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.295943 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mz6cv"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.297217 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m84cg"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.298614 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.299689 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.301023 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p5d7h"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.302079 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jnvdm"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.303395 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.305916 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkgqc"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.307671 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x6bcw"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.308848 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.310867 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fmtkh"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.311484 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.311914 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.312047 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.313140 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gj9mg"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.314118 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.315216 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.316274 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb2c7"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.318326 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.319192 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.320251 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bm6q7"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.321801 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xnfq7"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.323473 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.324021 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.325056 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ls9fb"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.325989 
4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.327787 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.328628 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.330538 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8v5w5"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.331047 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.332269 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.332784 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5m7h4"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.333354 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5m7h4" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.333571 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-npj6g"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334423 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr77j\" (UniqueName: \"kubernetes.io/projected/945901ad-f721-4897-bca6-16436563e92c-kube-api-access-hr77j\") pod \"downloads-7954f5f757-jnvdm\" (UID: \"945901ad-f721-4897-bca6-16436563e92c\") " pod="openshift-console/downloads-7954f5f757-jnvdm" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334454 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/59e2a4d4-b37b-46f1-9937-045403839c98-auth-proxy-config\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334473 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334490 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/080d9e4a-a9f2-4e85-b386-b0f1aefe6c14-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hkrln\" (UID: \"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334535 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334539 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e2a4d4-b37b-46f1-9937-045403839c98-config\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334756 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9v4\" (UniqueName: \"kubernetes.io/projected/59e2a4d4-b37b-46f1-9937-045403839c98-kube-api-access-ks9v4\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334782 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a168c48-91ab-4b62-ab5d-0599a0f7427a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334815 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a168c48-91ab-4b62-ab5d-0599a0f7427a-config\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334841 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92ddm\" (UniqueName: \"kubernetes.io/projected/080d9e4a-a9f2-4e85-b386-b0f1aefe6c14-kube-api-access-92ddm\") pod \"cluster-samples-operator-665b6dd947-hkrln\" (UID: \"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334882 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a168c48-91ab-4b62-ab5d-0599a0f7427a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334927 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/59e2a4d4-b37b-46f1-9937-045403839c98-machine-approver-tls\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334947 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.334970 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-config\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.335018 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e2a4d4-b37b-46f1-9937-045403839c98-config\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.335485 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/59e2a4d4-b37b-46f1-9937-045403839c98-auth-proxy-config\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.335897 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.336743 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.338032 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.339214 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/080d9e4a-a9f2-4e85-b386-b0f1aefe6c14-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hkrln\" (UID: \"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.339854 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bfn9"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.341052 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5m7h4"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.342370 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-npj6g"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.343564 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2lhs9"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.344390 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.344613 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2lhs9"] Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.350155 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/59e2a4d4-b37b-46f1-9937-045403839c98-machine-approver-tls\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.353308 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.372420 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.413134 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.415881 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a168c48-91ab-4b62-ab5d-0599a0f7427a-config\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.432541 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.454408 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.459075 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a168c48-91ab-4b62-ab5d-0599a0f7427a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.473328 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.493042 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.513148 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.533517 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.557432 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.573403 
4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.612587 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.613018 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.615945 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-config\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.632793 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.653682 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.658871 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.692483 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.713033 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.732937 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.752832 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.773471 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.792670 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.813237 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.832575 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.852961 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.873523 4904 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.893412 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.912249 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.933061 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.953218 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.982521 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 10:08:55 crc kubenswrapper[4904]: I0223 10:08:55.992516 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.013802 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.034197 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.053464 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.072565 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.092849 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.112889 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.132411 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.152547 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.172782 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.192960 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.213032 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.231392 4904 request.go:700] Waited for 1.001963869s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.233217 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.252886 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.273128 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.294407 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.313341 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.333834 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.353350 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.373024 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.392287 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.412837 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.433163 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.467123 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqgr\" (UniqueName: \"kubernetes.io/projected/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-kube-api-access-pwqgr\") pod \"route-controller-manager-6576b87f9c-b4j2w\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.488849 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bxxg\" (UniqueName: \"kubernetes.io/projected/3ec24148-b01d-44f3-9be5-7e98da1d93d3-kube-api-access-7bxxg\") pod \"openshift-apiserver-operator-796bbdcf4f-lbdkj\" (UID: \"3ec24148-b01d-44f3-9be5-7e98da1d93d3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.505818 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtvv\" (UniqueName: 
\"kubernetes.io/projected/6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b-kube-api-access-djtvv\") pod \"authentication-operator-69f744f599-bclzx\" (UID: \"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.528275 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zntn\" (UniqueName: \"kubernetes.io/projected/f76de875-328c-4a57-beac-43e3b38ed141-kube-api-access-8zntn\") pod \"apiserver-76f77b778f-scdtm\" (UID: \"f76de875-328c-4a57-beac-43e3b38ed141\") " pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.546144 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5kg8\" (UniqueName: \"kubernetes.io/projected/eaf68a5b-b08c-48a8-bfca-214b04069365-kube-api-access-g5kg8\") pod \"openshift-controller-manager-operator-756b6f6bc6-zlvdx\" (UID: \"eaf68a5b-b08c-48a8-bfca-214b04069365\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.552677 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.572293 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.593733 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.612976 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.619795 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.634015 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.648849 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.655544 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.676098 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.678880 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.684516 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.692769 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.694080 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.714321 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.734332 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.752491 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.773310 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.794466 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.814868 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.833769 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.853988 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.872288 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.893594 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.913116 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.933480 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.952935 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.972568 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 10:08:56 crc kubenswrapper[4904]: I0223 10:08:56.992666 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.012301 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.033151 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 10:08:57 crc 
kubenswrapper[4904]: I0223 10:08:57.053065 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.065854 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.066837 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-scdtm"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.073452 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.094055 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.101511 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bclzx"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.108172 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.109797 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj"] Feb 23 10:08:57 crc kubenswrapper[4904]: W0223 10:08:57.110309 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c03584b_f4a1_4a7b_b7b5_9b19200e8c7b.slice/crio-44efac52c93f0eebeef2b3153cb4259785f563de7a95d3b4b3d7a779c845f58b WatchSource:0}: Error finding container 44efac52c93f0eebeef2b3153cb4259785f563de7a95d3b4b3d7a779c845f58b: Status 404 returned error can't find the container with id 44efac52c93f0eebeef2b3153cb4259785f563de7a95d3b4b3d7a779c845f58b Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.114415 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 10:08:57 crc kubenswrapper[4904]: W0223 10:08:57.129466 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ec24148_b01d_44f3_9be5_7e98da1d93d3.slice/crio-1b3dd127b1ebef169243d486c04d75dd92a5d4cb56f4b98253ff365fff497e91 WatchSource:0}: Error finding container 1b3dd127b1ebef169243d486c04d75dd92a5d4cb56f4b98253ff365fff497e91: Status 404 returned error can't find the container with id 1b3dd127b1ebef169243d486c04d75dd92a5d4cb56f4b98253ff365fff497e91 Feb 23 10:08:57 crc kubenswrapper[4904]: W0223 10:08:57.129786 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf68a5b_b08c_48a8_bfca_214b04069365.slice/crio-0e3ef4a50f15c0b47d94e8e75d187ed425d9386e4966888a4446d5b57677c3d8 WatchSource:0}: Error finding container 0e3ef4a50f15c0b47d94e8e75d187ed425d9386e4966888a4446d5b57677c3d8: Status 404 returned error can't find the container with id 0e3ef4a50f15c0b47d94e8e75d187ed425d9386e4966888a4446d5b57677c3d8 Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.133736 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 
10:08:57.153540 4904 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.193254 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.197166 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr77j\" (UniqueName: \"kubernetes.io/projected/945901ad-f721-4897-bca6-16436563e92c-kube-api-access-hr77j\") pod \"downloads-7954f5f757-jnvdm\" (UID: \"945901ad-f721-4897-bca6-16436563e92c\") " pod="openshift-console/downloads-7954f5f757-jnvdm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.230274 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks9v4\" (UniqueName: \"kubernetes.io/projected/59e2a4d4-b37b-46f1-9937-045403839c98-kube-api-access-ks9v4\") pod \"machine-approver-56656f9798-22k6t\" (UID: \"59e2a4d4-b37b-46f1-9937-045403839c98\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.231750 4904 request.go:700] Waited for 1.896428964s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.254688 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92ddm\" (UniqueName: \"kubernetes.io/projected/080d9e4a-a9f2-4e85-b386-b0f1aefe6c14-kube-api-access-92ddm\") pod \"cluster-samples-operator-665b6dd947-hkrln\" (UID: \"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.266902 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a168c48-91ab-4b62-ab5d-0599a0f7427a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qr2nz\" (UID: \"1a168c48-91ab-4b62-ab5d-0599a0f7427a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.287815 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc97320a-b3d3-4a5f-8dcf-34fb942f6669-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vcxn6\" (UID: \"fc97320a-b3d3-4a5f-8dcf-34fb942f6669\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.293088 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.313062 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.313295 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.323135 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.332947 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 10:08:57 crc kubenswrapper[4904]: W0223 10:08:57.337416 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e2a4d4_b37b_46f1_9937_045403839c98.slice/crio-2caaf091e6f9175d81b3e7596101b846776805c526217b6a146d8d26257a9c56 WatchSource:0}: Error finding container 2caaf091e6f9175d81b3e7596101b846776805c526217b6a146d8d26257a9c56: Status 404 returned error can't find the container with id 2caaf091e6f9175d81b3e7596101b846776805c526217b6a146d8d26257a9c56 Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.420352 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jnvdm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.455203 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459256 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459283 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459301 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459489 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-service-ca\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459511 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb367911-0f91-48b5-badb-1338ea2de5c1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 
10:08:57.459535 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459549 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22j9\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-kube-api-access-k22j9\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459567 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03eb9af6-09e2-4d8b-ac64-c924049bebee-audit-dir\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459587 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdg8t\" (UniqueName: \"kubernetes.io/projected/2e93b452-62da-4c18-953a-231a397caa58-kube-api-access-mdg8t\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459613 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fe2282c-11c4-4545-9301-f417bbe9dee7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459627 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459644 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de6f2397-7776-4997-a2d3-c6537471079f-trusted-ca\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459658 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-encryption-config\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459674 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-bound-sa-token\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459688 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5022ad-ae93-40a3-a025-c192c3d4ead4-serving-cert\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459707 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-config\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459755 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459770 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwmd\" (UniqueName: \"kubernetes.io/projected/fb5022ad-ae93-40a3-a025-c192c3d4ead4-kube-api-access-2rwmd\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459784 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fb367911-0f91-48b5-badb-1338ea2de5c1-images\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459802 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-certificates\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459822 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-ca\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459857 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459877 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knl6b\" (UniqueName: \"kubernetes.io/projected/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-kube-api-access-knl6b\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459930 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fe2282c-11c4-4545-9301-f417bbe9dee7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459950 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb367911-0f91-48b5-badb-1338ea2de5c1-config\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459975 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5af91ff-1c77-4274-b318-fef6771be569-metrics-tls\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.459994 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5af91ff-1c77-4274-b318-fef6771be569-trusted-ca\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460014 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q672m\" (UniqueName: \"kubernetes.io/projected/a5f464dd-932e-40b1-98d2-81437ad39aab-kube-api-access-q672m\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460045 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-trusted-ca\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460062 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-config\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460080 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5f464dd-932e-40b1-98d2-81437ad39aab-serving-cert\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460170 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-policies\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460202 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de6f2397-7776-4997-a2d3-c6537471079f-serving-cert\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460222 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtdd\" (UniqueName: \"kubernetes.io/projected/a5af91ff-1c77-4274-b318-fef6771be569-kube-api-access-kqtdd\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460241 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf306b5-f8ab-4774-8163-c2a2b47f1940-serving-cert\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460263 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx75x\" (UniqueName: \"kubernetes.io/projected/bcf306b5-f8ab-4774-8163-c2a2b47f1940-kube-api-access-wx75x\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460294 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460349 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460373 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-client-ca\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460392 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e93b452-62da-4c18-953a-231a397caa58-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460413 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460430 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e93b452-62da-4c18-953a-231a397caa58-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b77tg\" (UniqueName: \"kubernetes.io/projected/a666f5d4-c0d8-4714-b04d-64cf96e30345-kube-api-access-b77tg\") pod \"dns-operator-744455d44c-gj9mg\" (UID: \"a666f5d4-c0d8-4714-b04d-64cf96e30345\") " pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460479 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460493 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a5f464dd-932e-40b1-98d2-81437ad39aab-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460508 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a666f5d4-c0d8-4714-b04d-64cf96e30345-metrics-tls\") pod \"dns-operator-744455d44c-gj9mg\" (UID: \"a666f5d4-c0d8-4714-b04d-64cf96e30345\") " pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460522 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-dir\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460545 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2krx4\" (UniqueName: \"kubernetes.io/projected/de6f2397-7776-4997-a2d3-c6537471079f-kube-api-access-2krx4\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460560 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jr8\" (UniqueName: \"kubernetes.io/projected/fb367911-0f91-48b5-badb-1338ea2de5c1-kube-api-access-57jr8\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460587 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6f2397-7776-4997-a2d3-c6537471079f-config\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460601 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-serving-cert\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460615 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7pqt\" (UniqueName: \"kubernetes.io/projected/03eb9af6-09e2-4d8b-ac64-c924049bebee-kube-api-access-c7pqt\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460635 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460665 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-tls\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460679 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5af91ff-1c77-4274-b318-fef6771be569-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460696 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460728 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e93b452-62da-4c18-953a-231a397caa58-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460750 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460769 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460785 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-client\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460802 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-audit-policies\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.460816 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-etcd-client\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.478234 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" Feb 23 10:08:57 crc kubenswrapper[4904]: E0223 10:08:57.478998 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:57.97897353 +0000 UTC m=+171.399347043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.482069 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562093 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562274 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-trusted-ca-bundle\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562301 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6h2n\" (UniqueName: \"kubernetes.io/projected/abc78ff8-2055-4dbe-ae4e-67061adfe881-kube-api-access-q6h2n\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562318 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8445\" (UniqueName: \"kubernetes.io/projected/14a8ee7a-0483-4a58-a9e4-7b26248e998b-kube-api-access-h8445\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrs2w\" (UID: \"14a8ee7a-0483-4a58-a9e4-7b26248e998b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562335 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb983a4-cc92-48b0-b722-920f10ca60f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:57 crc kubenswrapper[4904]: E0223 10:08:57.562366 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.062338915 +0000 UTC m=+171.482712418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562395 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562418 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t46m5\" (UniqueName: \"kubernetes.io/projected/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-kube-api-access-t46m5\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562546 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt8s5\" (UniqueName: \"kubernetes.io/projected/5dc90b39-0709-4365-a42f-c8f8330f0be0-kube-api-access-zt8s5\") pod \"ingress-canary-5m7h4\" (UID: \"5dc90b39-0709-4365-a42f-c8f8330f0be0\") " pod="openshift-ingress-canary/ingress-canary-5m7h4" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bj55\" (UniqueName: \"kubernetes.io/projected/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-kube-api-access-6bj55\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562660 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562773 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5af91ff-1c77-4274-b318-fef6771be569-trusted-ca\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562819 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-trusted-ca\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562853 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/14a8ee7a-0483-4a58-a9e4-7b26248e998b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrs2w\" (UID: \"14a8ee7a-0483-4a58-a9e4-7b26248e998b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562878 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562902 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cb983a4-cc92-48b0-b722-920f10ca60f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562931 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5f464dd-932e-40b1-98d2-81437ad39aab-serving-cert\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562956 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-proxy-tls\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.562995 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-policies\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563017 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de6f2397-7776-4997-a2d3-c6537471079f-serving-cert\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " 
pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563038 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqtdd\" (UniqueName: \"kubernetes.io/projected/a5af91ff-1c77-4274-b318-fef6771be569-kube-api-access-kqtdd\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563060 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sbq9\" (UniqueName: \"kubernetes.io/projected/c8c284af-08a1-403a-9f3e-f7ed477a1bf9-kube-api-access-4sbq9\") pod \"multus-admission-controller-857f4d67dd-cb2c7\" (UID: \"c8c284af-08a1-403a-9f3e-f7ed477a1bf9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563082 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ljpz\" (UniqueName: \"kubernetes.io/projected/74d9be42-ba8c-426b-b4b0-bad0bc65648b-kube-api-access-2ljpz\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563105 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563126 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx75x\" (UniqueName: \"kubernetes.io/projected/bcf306b5-f8ab-4774-8163-c2a2b47f1940-kube-api-access-wx75x\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563170 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563192 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-client-ca\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563216 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563245 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b77tg\" (UniqueName: \"kubernetes.io/projected/a666f5d4-c0d8-4714-b04d-64cf96e30345-kube-api-access-b77tg\") pod \"dns-operator-744455d44c-gj9mg\" (UID: \"a666f5d4-c0d8-4714-b04d-64cf96e30345\") " pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563286 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhmz\" (UniqueName: \"kubernetes.io/projected/d2c1c227-0297-4ba0-9acb-4690cffd0554-kube-api-access-fwhmz\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563310 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xmn\" (UniqueName: \"kubernetes.io/projected/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-kube-api-access-76xmn\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563334 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-dir\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563356 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a249d265-3593-41f9-ac65-9a2b60d436cb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563379 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a666f5d4-c0d8-4714-b04d-64cf96e30345-metrics-tls\") pod \"dns-operator-744455d44c-gj9mg\" (UID: \"a666f5d4-c0d8-4714-b04d-64cf96e30345\") " pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563844 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-socket-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563874 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74sw2\" (UniqueName: \"kubernetes.io/projected/2b06eca1-c860-4af5-b973-ff30c046462b-kube-api-access-74sw2\") pod \"package-server-manager-789f6589d5-vfhxr\" (UID: \"2b06eca1-c860-4af5-b973-ff30c046462b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 
10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563907 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-serving-cert\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563930 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06eca1-c860-4af5-b973-ff30c046462b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vfhxr\" (UID: \"2b06eca1-c860-4af5-b973-ff30c046462b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563956 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-tls\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563981 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5af91ff-1c77-4274-b318-fef6771be569-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564034 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564059 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564086 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.563609 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-policies\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564266 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-dir\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564288 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-client-ca\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564375 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5af91ff-1c77-4274-b318-fef6771be569-trusted-ca\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564568 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-audit-policies\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564602 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564629 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-metrics-certs\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564652 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5dc90b39-0709-4365-a42f-c8f8330f0be0-cert\") pod \"ingress-canary-5m7h4\" (UID: \"5dc90b39-0709-4365-a42f-c8f8330f0be0\") " pod="openshift-ingress-canary/ingress-canary-5m7h4" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564691 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-stats-auth\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564734 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-service-ca\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" 
Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.564764 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-plugins-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565239 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-srv-cert\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565282 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/733bfc69-81cd-4b47-b8ee-2380f9a728b6-config-volume\") pod \"dns-default-2lhs9\" (UID: \"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565327 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22j9\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-kube-api-access-k22j9\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565356 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdg8t\" (UniqueName: \"kubernetes.io/projected/2e93b452-62da-4c18-953a-231a397caa58-kube-api-access-mdg8t\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565383 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrfh\" (UniqueName: \"kubernetes.io/projected/881a7096-caa3-4abc-8a27-f9544300a130-kube-api-access-gkrfh\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565407 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d58264-b6c1-46e9-b283-95faf671c7de-config\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565430 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-registration-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565458 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565486 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2m69\" (UniqueName: \"kubernetes.io/projected/b6031dca-3a8a-48a8-8dc5-b887407ef01e-kube-api-access-v2m69\") pod \"catalog-operator-68c6474976-48sfc\" (UID: \"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565509 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de6f2397-7776-4997-a2d3-c6537471079f-trusted-ca\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565533 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5022ad-ae93-40a3-a025-c192c3d4ead4-serving-cert\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565558 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565583 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwmd\" (UniqueName: \"kubernetes.io/projected/fb5022ad-ae93-40a3-a025-c192c3d4ead4-kube-api-access-2rwmd\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565608 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-certificates\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565629 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/02accb03-9a1d-4244-ae51-40a2e1967ab7-tmpfs\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565654 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/733bfc69-81cd-4b47-b8ee-2380f9a728b6-metrics-tls\") pod \"dns-default-2lhs9\" (UID: 
\"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565677 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-csi-data-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565822 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-audit-policies\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565704 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-ca\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565929 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abb69e50-b84d-4499-a703-72e00ef6ff2a-service-ca-bundle\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565972 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02accb03-9a1d-4244-ae51-40a2e1967ab7-webhook-cert\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.565995 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8ab8a442-5053-40b9-8415-4d6f82295dcc-node-bootstrap-token\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566055 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knl6b\" (UniqueName: \"kubernetes.io/projected/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-kube-api-access-knl6b\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566081 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d58264-b6c1-46e9-b283-95faf671c7de-serving-cert\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566087 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566105 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-proxy-tls\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566154 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fe2282c-11c4-4545-9301-f417bbe9dee7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566178 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb367911-0f91-48b5-badb-1338ea2de5c1-config\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566203 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6031dca-3a8a-48a8-8dc5-b887407ef01e-srv-cert\") pod \"catalog-operator-68c6474976-48sfc\" (UID: \"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566226 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2c1c227-0297-4ba0-9acb-4690cffd0554-secret-volume\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566250 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566275 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8c284af-08a1-403a-9f3e-f7ed477a1bf9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb2c7\" (UID: \"c8c284af-08a1-403a-9f3e-f7ed477a1bf9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566499 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vq74\" 
(UniqueName: \"kubernetes.io/projected/588b4893-86b2-4451-930b-48aee4c00761-kube-api-access-2vq74\") pod \"migrator-59844c95c7-d8mh2\" (UID: \"588b4893-86b2-4451-930b-48aee4c00761\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566524 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5af91ff-1c77-4274-b318-fef6771be569-metrics-tls\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566549 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q672m\" (UniqueName: \"kubernetes.io/projected/a5f464dd-932e-40b1-98d2-81437ad39aab-kube-api-access-q672m\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566562 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-trusted-ca\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566587 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7fv\" (UniqueName: \"kubernetes.io/projected/733bfc69-81cd-4b47-b8ee-2380f9a728b6-kube-api-access-2w7fv\") pod \"dns-default-2lhs9\" (UID: \"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566610 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nddh\" (UniqueName: \"kubernetes.io/projected/abb69e50-b84d-4499-a703-72e00ef6ff2a-kube-api-access-9nddh\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566665 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-config\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566688 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a249d265-3593-41f9-ac65-9a2b60d436cb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566734 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6b39937b-9604-4c58-b4f5-fea879103d21-signing-cabundle\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: 
\"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566763 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf306b5-f8ab-4774-8163-c2a2b47f1940-serving-cert\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566787 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb94f\" (UniqueName: \"kubernetes.io/projected/8ab8a442-5053-40b9-8415-4d6f82295dcc-kube-api-access-tb94f\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566818 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e93b452-62da-4c18-953a-231a397caa58-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566841 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-oauth-config\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566865 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e93b452-62da-4c18-953a-231a397caa58-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566886 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-oauth-serving-cert\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566907 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2c1c227-0297-4ba0-9acb-4690cffd0554-config-volume\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.566955 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc 
kubenswrapper[4904]: I0223 10:08:57.566980 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kq2w\" (UniqueName: \"kubernetes.io/projected/6b39937b-9604-4c58-b4f5-fea879103d21-kube-api-access-4kq2w\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: \"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567021 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a5f464dd-932e-40b1-98d2-81437ad39aab-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567044 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-config\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567067 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvdm\" (UniqueName: \"kubernetes.io/projected/a249d265-3593-41f9-ac65-9a2b60d436cb-kube-api-access-4wvdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567094 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57jr8\" (UniqueName: \"kubernetes.io/projected/fb367911-0f91-48b5-badb-1338ea2de5c1-kube-api-access-57jr8\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567117 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2krx4\" (UniqueName: \"kubernetes.io/projected/de6f2397-7776-4997-a2d3-c6537471079f-kube-api-access-2krx4\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567159 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6f2397-7776-4997-a2d3-c6537471079f-config\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567181 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7pqt\" (UniqueName: \"kubernetes.io/projected/03eb9af6-09e2-4d8b-ac64-c924049bebee-kube-api-access-c7pqt\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567207 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fcc\" (UniqueName: \"kubernetes.io/projected/29d58264-b6c1-46e9-b283-95faf671c7de-kube-api-access-62fcc\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567239 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567265 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e93b452-62da-4c18-953a-231a397caa58-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567302 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb983a4-cc92-48b0-b722-920f10ca60f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567330 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-client\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567352 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-etcd-client\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567377 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px2hs\" (UniqueName: \"kubernetes.io/projected/02accb03-9a1d-4244-ae51-40a2e1967ab7-kube-api-access-px2hs\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567400 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8ab8a442-5053-40b9-8415-4d6f82295dcc-certs\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567427 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567449 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02accb03-9a1d-4244-ae51-40a2e1967ab7-apiservice-cert\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567470 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-default-certificate\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567498 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567527 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567551 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6031dca-3a8a-48a8-8dc5-b887407ef01e-profile-collector-cert\") pod \"catalog-operator-68c6474976-48sfc\" (UID: \"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567603 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567728 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb367911-0f91-48b5-badb-1338ea2de5c1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567788 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/03eb9af6-09e2-4d8b-ac64-c924049bebee-audit-dir\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567814 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-service-ca\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6b39937b-9604-4c58-b4f5-fea879103d21-signing-key\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: \"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567857 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567885 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fe2282c-11c4-4545-9301-f417bbe9dee7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567912 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-encryption-config\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567939 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-bound-sa-token\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567964 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-serving-cert\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.568024 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-config\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc 
kubenswrapper[4904]: I0223 10:08:57.568052 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-mountpoint-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.568078 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fb367911-0f91-48b5-badb-1338ea2de5c1-images\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.568101 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-images\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.568814 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.569532 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-service-ca\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.570114 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de6f2397-7776-4997-a2d3-c6537471079f-trusted-ca\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.571501 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-certificates\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.572329 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-config\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.572799 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-ca\") pod \"etcd-operator-b45778765-xnfq7\" (UID: 
\"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.573535 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03eb9af6-09e2-4d8b-ac64-c924049bebee-audit-dir\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.573557 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.567854 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb367911-0f91-48b5-badb-1338ea2de5c1-config\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.573913 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fe2282c-11c4-4545-9301-f417bbe9dee7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.574242 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.575292 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.575384 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a666f5d4-c0d8-4714-b04d-64cf96e30345-metrics-tls\") pod \"dns-operator-744455d44c-gj9mg\" (UID: \"a666f5d4-c0d8-4714-b04d-64cf96e30345\") " pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.575730 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5022ad-ae93-40a3-a025-c192c3d4ead4-config\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.576288 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/fb367911-0f91-48b5-badb-1338ea2de5c1-images\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.576701 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-tls\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.577140 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de6f2397-7776-4997-a2d3-c6537471079f-serving-cert\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.577253 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e93b452-62da-4c18-953a-231a397caa58-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.577625 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6f2397-7776-4997-a2d3-c6537471079f-config\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.577633 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: E0223 10:08:57.577754 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.077739466 +0000 UTC m=+171.498112979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.578074 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf306b5-f8ab-4774-8163-c2a2b47f1940-serving-cert\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.578371 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fe2282c-11c4-4545-9301-f417bbe9dee7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.578439 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a5f464dd-932e-40b1-98d2-81437ad39aab-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.579046 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.580096 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.581405 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.581729 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.582774 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5af91ff-1c77-4274-b318-fef6771be569-metrics-tls\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.585040 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03eb9af6-09e2-4d8b-ac64-c924049bebee-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.585526 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-serving-cert\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.587072 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb5022ad-ae93-40a3-a025-c192c3d4ead4-etcd-client\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.587164 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-etcd-client\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.587592 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5022ad-ae93-40a3-a025-c192c3d4ead4-serving-cert\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.588313 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e93b452-62da-4c18-953a-231a397caa58-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.589619 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb367911-0f91-48b5-badb-1338ea2de5c1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.590041 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03eb9af6-09e2-4d8b-ac64-c924049bebee-encryption-config\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc 
kubenswrapper[4904]: I0223 10:08:57.591198 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.591440 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.591504 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.596037 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5f464dd-932e-40b1-98d2-81437ad39aab-serving-cert\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.618036 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqtdd\" (UniqueName: \"kubernetes.io/projected/a5af91ff-1c77-4274-b318-fef6771be569-kube-api-access-kqtdd\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.629814 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b77tg\" (UniqueName: \"kubernetes.io/projected/a666f5d4-c0d8-4714-b04d-64cf96e30345-kube-api-access-b77tg\") pod \"dns-operator-744455d44c-gj9mg\" (UID: \"a666f5d4-c0d8-4714-b04d-64cf96e30345\") " pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.660322 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5af91ff-1c77-4274-b318-fef6771be569-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pw46x\" (UID: \"a5af91ff-1c77-4274-b318-fef6771be569\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.671527 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.671806 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2b06eca1-c860-4af5-b973-ff30c046462b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vfhxr\" (UID: \"2b06eca1-c860-4af5-b973-ff30c046462b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.671832 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.671904 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-metrics-certs\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.671927 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5dc90b39-0709-4365-a42f-c8f8330f0be0-cert\") pod \"ingress-canary-5m7h4\" (UID: \"5dc90b39-0709-4365-a42f-c8f8330f0be0\") " pod="openshift-ingress-canary/ingress-canary-5m7h4" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.671947 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-stats-auth\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.671996 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-plugins-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672013 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-srv-cert\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672027 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/733bfc69-81cd-4b47-b8ee-2380f9a728b6-config-volume\") pod \"dns-default-2lhs9\" (UID: \"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672058 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkrfh\" (UniqueName: \"kubernetes.io/projected/881a7096-caa3-4abc-8a27-f9544300a130-kube-api-access-gkrfh\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672119 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d58264-b6c1-46e9-b283-95faf671c7de-config\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-registration-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672152 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2m69\" (UniqueName: \"kubernetes.io/projected/b6031dca-3a8a-48a8-8dc5-b887407ef01e-kube-api-access-v2m69\") pod \"catalog-operator-68c6474976-48sfc\" (UID: \"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672209 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/02accb03-9a1d-4244-ae51-40a2e1967ab7-tmpfs\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672240 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/733bfc69-81cd-4b47-b8ee-2380f9a728b6-metrics-tls\") pod \"dns-default-2lhs9\" (UID: \"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672291 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-csi-data-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672328 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abb69e50-b84d-4499-a703-72e00ef6ff2a-service-ca-bundle\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672355 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02accb03-9a1d-4244-ae51-40a2e1967ab7-webhook-cert\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672371 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8ab8a442-5053-40b9-8415-4d6f82295dcc-node-bootstrap-token\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " 
pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672385 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d58264-b6c1-46e9-b283-95faf671c7de-serving-cert\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672400 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-proxy-tls\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672408 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jnvdm"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672423 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2c1c227-0297-4ba0-9acb-4690cffd0554-secret-volume\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672511 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672543 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6031dca-3a8a-48a8-8dc5-b887407ef01e-srv-cert\") pod \"catalog-operator-68c6474976-48sfc\" (UID: \"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672571 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vq74\" (UniqueName: \"kubernetes.io/projected/588b4893-86b2-4451-930b-48aee4c00761-kube-api-access-2vq74\") pod \"migrator-59844c95c7-d8mh2\" (UID: \"588b4893-86b2-4451-930b-48aee4c00761\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672593 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8c284af-08a1-403a-9f3e-f7ed477a1bf9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb2c7\" (UID: \"c8c284af-08a1-403a-9f3e-f7ed477a1bf9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672631 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7fv\" (UniqueName: \"kubernetes.io/projected/733bfc69-81cd-4b47-b8ee-2380f9a728b6-kube-api-access-2w7fv\") pod \"dns-default-2lhs9\" (UID: \"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " 
pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672655 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nddh\" (UniqueName: \"kubernetes.io/projected/abb69e50-b84d-4499-a703-72e00ef6ff2a-kube-api-access-9nddh\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672685 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a249d265-3593-41f9-ac65-9a2b60d436cb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672725 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6b39937b-9604-4c58-b4f5-fea879103d21-signing-cabundle\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: \"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672754 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb94f\" (UniqueName: \"kubernetes.io/projected/8ab8a442-5053-40b9-8415-4d6f82295dcc-kube-api-access-tb94f\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672787 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-oauth-config\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672810 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-oauth-serving-cert\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672833 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2c1c227-0297-4ba0-9acb-4690cffd0554-config-volume\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672869 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kq2w\" (UniqueName: \"kubernetes.io/projected/6b39937b-9604-4c58-b4f5-fea879103d21-kube-api-access-4kq2w\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: \"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672899 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-config\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672923 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvdm\" (UniqueName: \"kubernetes.io/projected/a249d265-3593-41f9-ac65-9a2b60d436cb-kube-api-access-4wvdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.672989 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fcc\" (UniqueName: \"kubernetes.io/projected/29d58264-b6c1-46e9-b283-95faf671c7de-kube-api-access-62fcc\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673038 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb983a4-cc92-48b0-b722-920f10ca60f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673068 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8ab8a442-5053-40b9-8415-4d6f82295dcc-certs\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673099 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px2hs\" (UniqueName: \"kubernetes.io/projected/02accb03-9a1d-4244-ae51-40a2e1967ab7-kube-api-access-px2hs\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673124 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02accb03-9a1d-4244-ae51-40a2e1967ab7-apiservice-cert\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673147 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-default-certificate\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673172 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6031dca-3a8a-48a8-8dc5-b887407ef01e-profile-collector-cert\") pod \"catalog-operator-68c6474976-48sfc\" (UID: 
\"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673209 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6b39937b-9604-4c58-b4f5-fea879103d21-signing-key\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: \"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673236 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-service-ca\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673258 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673295 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-serving-cert\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673318 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-mountpoint-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673342 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-images\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673368 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-trusted-ca-bundle\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673391 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb983a4-cc92-48b0-b722-920f10ca60f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673414 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673441 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6h2n\" (UniqueName: \"kubernetes.io/projected/abc78ff8-2055-4dbe-ae4e-67061adfe881-kube-api-access-q6h2n\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673464 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8445\" (UniqueName: \"kubernetes.io/projected/14a8ee7a-0483-4a58-a9e4-7b26248e998b-kube-api-access-h8445\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrs2w\" (UID: \"14a8ee7a-0483-4a58-a9e4-7b26248e998b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673488 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t46m5\" (UniqueName: \"kubernetes.io/projected/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-kube-api-access-t46m5\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673511 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8s5\" (UniqueName: \"kubernetes.io/projected/5dc90b39-0709-4365-a42f-c8f8330f0be0-kube-api-access-zt8s5\") pod \"ingress-canary-5m7h4\" (UID: \"5dc90b39-0709-4365-a42f-c8f8330f0be0\") " pod="openshift-ingress-canary/ingress-canary-5m7h4" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673534 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bj55\" (UniqueName: \"kubernetes.io/projected/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-kube-api-access-6bj55\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673567 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/14a8ee7a-0483-4a58-a9e4-7b26248e998b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrs2w\" (UID: \"14a8ee7a-0483-4a58-a9e4-7b26248e998b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673595 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673617 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6cb983a4-cc92-48b0-b722-920f10ca60f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673642 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-proxy-tls\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673685 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ljpz\" (UniqueName: \"kubernetes.io/projected/74d9be42-ba8c-426b-b4b0-bad0bc65648b-kube-api-access-2ljpz\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.673701 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-registration-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.674352 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/02accb03-9a1d-4244-ae51-40a2e1967ab7-tmpfs\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.674377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d58264-b6c1-46e9-b283-95faf671c7de-config\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:57 crc kubenswrapper[4904]: E0223 10:08:57.674811 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.174789502 +0000 UTC m=+171.595163095 (durationBeforeRetry 500ms). 
Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.675297 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-csi-data-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.676160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abb69e50-b84d-4499-a703-72e00ef6ff2a-service-ca-bundle\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.676670 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.677095 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sbq9\" (UniqueName: \"kubernetes.io/projected/c8c284af-08a1-403a-9f3e-f7ed477a1bf9-kube-api-access-4sbq9\") pod \"multus-admission-controller-857f4d67dd-cb2c7\" (UID: \"c8c284af-08a1-403a-9f3e-f7ed477a1bf9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.677189 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwhmz\" (UniqueName: \"kubernetes.io/projected/d2c1c227-0297-4ba0-9acb-4690cffd0554-kube-api-access-fwhmz\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.677217 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76xmn\" (UniqueName: \"kubernetes.io/projected/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-kube-api-access-76xmn\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.677247 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a249d265-3593-41f9-ac65-9a2b60d436cb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.677275 4904 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-socket-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.677304 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74sw2\" (UniqueName: \"kubernetes.io/projected/2b06eca1-c860-4af5-b973-ff30c046462b-kube-api-access-74sw2\") pod \"package-server-manager-789f6589d5-vfhxr\" (UID: \"2b06eca1-c860-4af5-b973-ff30c046462b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.677562 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5dc90b39-0709-4365-a42f-c8f8330f0be0-cert\") pod \"ingress-canary-5m7h4\" (UID: \"5dc90b39-0709-4365-a42f-c8f8330f0be0\") " pod="openshift-ingress-canary/ingress-canary-5m7h4" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.677732 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2c1c227-0297-4ba0-9acb-4690cffd0554-secret-volume\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.678618 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.679450 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/733bfc69-81cd-4b47-b8ee-2380f9a728b6-config-volume\") pod \"dns-default-2lhs9\" (UID: \"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.679600 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-service-ca\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.680432 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.680922 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-plugins-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.681176 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-srv-cert\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.681209 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d58264-b6c1-46e9-b283-95faf671c7de-serving-cert\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.681327 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-socket-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.682356 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-metrics-certs\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.683015 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6b39937b-9604-4c58-b4f5-fea879103d21-signing-cabundle\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: \"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.683740 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-images\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.683816 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/881a7096-caa3-4abc-8a27-f9544300a130-mountpoint-dir\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.685599 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a249d265-3593-41f9-ac65-9a2b60d436cb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.686123 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-stats-auth\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc 
kubenswrapper[4904]: I0223 10:08:57.689014 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/733bfc69-81cd-4b47-b8ee-2380f9a728b6-metrics-tls\") pod \"dns-default-2lhs9\" (UID: \"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.689893 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-config\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.692295 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb983a4-cc92-48b0-b722-920f10ca60f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.692311 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.694700 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-oauth-serving-cert\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.698190 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.698290 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/abb69e50-b84d-4499-a703-72e00ef6ff2a-default-certificate\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.698604 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6031dca-3a8a-48a8-8dc5-b887407ef01e-profile-collector-cert\") pod \"catalog-operator-68c6474976-48sfc\" (UID: \"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.699160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-serving-cert\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.699264 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-proxy-tls\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.700012 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2c1c227-0297-4ba0-9acb-4690cffd0554-config-volume\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.700126 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8ab8a442-5053-40b9-8415-4d6f82295dcc-node-bootstrap-token\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.700253 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx75x\" (UniqueName: \"kubernetes.io/projected/bcf306b5-f8ab-4774-8163-c2a2b47f1940-kube-api-access-wx75x\") pod \"controller-manager-879f6c89f-mz6cv\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.701274 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a249d265-3593-41f9-ac65-9a2b60d436cb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.701537 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/14a8ee7a-0483-4a58-a9e4-7b26248e998b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrs2w\" (UID: \"14a8ee7a-0483-4a58-a9e4-7b26248e998b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.701593 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb983a4-cc92-48b0-b722-920f10ca60f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.701607 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02accb03-9a1d-4244-ae51-40a2e1967ab7-webhook-cert\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.701864 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-trusted-ca-bundle\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: 
I0223 10:08:57.702215 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22j9\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-kube-api-access-k22j9\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.704192 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8c284af-08a1-403a-9f3e-f7ed477a1bf9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb2c7\" (UID: \"c8c284af-08a1-403a-9f3e-f7ed477a1bf9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.704997 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.705480 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.706125 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-proxy-tls\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.706549 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-oauth-config\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.707307 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6b39937b-9604-4c58-b4f5-fea879103d21-signing-key\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: \"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.709093 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06eca1-c860-4af5-b973-ff30c046462b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vfhxr\" (UID: \"2b06eca1-c860-4af5-b973-ff30c046462b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.712689 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdg8t\" (UniqueName: 
\"kubernetes.io/projected/2e93b452-62da-4c18-953a-231a397caa58-kube-api-access-mdg8t\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.714432 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8ab8a442-5053-40b9-8415-4d6f82295dcc-certs\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.719084 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02accb03-9a1d-4244-ae51-40a2e1967ab7-apiservice-cert\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.719457 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6031dca-3a8a-48a8-8dc5-b887407ef01e-srv-cert\") pod \"catalog-operator-68c6474976-48sfc\" (UID: \"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.729407 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q672m\" (UniqueName: \"kubernetes.io/projected/a5f464dd-932e-40b1-98d2-81437ad39aab-kube-api-access-q672m\") pod \"openshift-config-operator-7777fb866f-m84cg\" (UID: \"a5f464dd-932e-40b1-98d2-81437ad39aab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.730961 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.746883 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwmd\" (UniqueName: \"kubernetes.io/projected/fb5022ad-ae93-40a3-a025-c192c3d4ead4-kube-api-access-2rwmd\") pod \"etcd-operator-b45778765-xnfq7\" (UID: \"fb5022ad-ae93-40a3-a025-c192c3d4ead4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.763554 4904 util.go:30] "No sandbox for pod can be found. 
Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.768498 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" event={"ID":"3ec24148-b01d-44f3-9be5-7e98da1d93d3","Type":"ContainerStarted","Data":"d47927363f7f737b603431ecaaca2acec77d659056b5c5eceb19e8e8ee13289c"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.768557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" event={"ID":"3ec24148-b01d-44f3-9be5-7e98da1d93d3","Type":"ContainerStarted","Data":"1b3dd127b1ebef169243d486c04d75dd92a5d4cb56f4b98253ff365fff497e91"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.772837 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jnvdm" event={"ID":"945901ad-f721-4897-bca6-16436563e92c","Type":"ContainerStarted","Data":"f8d1363f06be50c901985f5e80861bae51e805f893ee0587f508109be884e638"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.778143 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: E0223 10:08:57.779122 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.27910712 +0000 UTC m=+171.699480633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
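
Same root cause as the TearDown failure, now on the mount side: MountDevice for the image-registry PVC needs a CSI client for kubevirt.io.hostpath-provisioner, and none is registered yet. Until the driver's registration socket appears, both the unmount for pod 8f668bae-612b-4b75-9490-919e737c6a3b and the mount for pod 0fe2282c-11c4-4545-9301-f417bbe9dee7 keep cycling through this error. A small diagnostic sketch for checking registration on the node itself; it assumes the conventional /var/lib/kubelet/plugins_registry layout that the restorecon lines earlier in this journal suggest, and the example socket name is illustrative:

package main

import (
	"fmt"
	"os"
)

func main() {
	// node-driver-registrar normally drops one socket per driver here,
	// e.g. kubevirt.io.hostpath-provisioner-reg.sock (name is an assumption).
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read registry:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}

An empty listing while these errors repeat would confirm the driver pod (csi-hostpathplugin-npj6g, still being set up above) simply has not registered yet.
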
Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.781358 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" event={"ID":"59e2a4d4-b37b-46f1-9937-045403839c98","Type":"ContainerStarted","Data":"e9932871c692c59003ea432e629d7b17e1d08ede74f495bb6d93cbe765941e4d"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.781480 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" event={"ID":"59e2a4d4-b37b-46f1-9937-045403839c98","Type":"ContainerStarted","Data":"2caaf091e6f9175d81b3e7596101b846776805c526217b6a146d8d26257a9c56"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.781938 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knl6b\" (UniqueName: \"kubernetes.io/projected/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-kube-api-access-knl6b\") pod \"oauth-openshift-558db77b4-p5d7h\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.785360 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" event={"ID":"879b2a65-a224-4ac5-8d57-ea3b776d4a5c","Type":"ContainerStarted","Data":"6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.785415 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" event={"ID":"879b2a65-a224-4ac5-8d57-ea3b776d4a5c","Type":"ContainerStarted","Data":"098772ec483d3453597b881def343ceedb43e2bac78444b69d42f37dae388720"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.786284 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-bound-sa-token\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.786483 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.788552 4904 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-b4j2w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.788585 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" podUID="879b2a65-a224-4ac5-8d57-ea3b776d4a5c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
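
The readiness failure above is expected this early in startup: the route-controller-manager container logged ContainerStarted only milliseconds before, so nothing is listening on 10.217.0.6:8443 yet and the probe's GET is refused at the TCP level. A minimal Go sketch of the equivalent check; the URL is taken from the log line, the one-second timeout is an illustrative choice, and skipping certificate verification mirrors (as an assumption here) how HTTPS probes treat self-signed serving certs:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.6:8443/healthz")
	if err != nil {
		// Early in startup this prints the same failure as the log:
		// dial tcp 10.217.0.6:8443: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}

The kubelet simply records the failure and re-probes on the pod's configured period, so a handful of these records during bring-up is normal noise rather than a fault.
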
probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.791904 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" event={"ID":"1a168c48-91ab-4b62-ab5d-0599a0f7427a","Type":"ContainerStarted","Data":"6b772c9532574cd6a15806e32969ac3a0718fe79ef4b2621713f710eb22e94b5"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.793904 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" event={"ID":"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14","Type":"ContainerStarted","Data":"491719441bee8beaef6a2ecb2d9b9b1808230718687822238387f25b76407ece"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.793930 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" event={"ID":"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14","Type":"ContainerStarted","Data":"6b666001b035ccfcb5b1de35f971f237a16ad9b269b65c1a0b0f25ce7ae9048e"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.879318 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.882506 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7pqt\" (UniqueName: \"kubernetes.io/projected/03eb9af6-09e2-4d8b-ac64-c924049bebee-kube-api-access-c7pqt\") pod \"apiserver-7bbb656c7d-ws8fp\" (UID: \"03eb9af6-09e2-4d8b-ac64-c924049bebee\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.882551 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:57 crc kubenswrapper[4904]: E0223 10:08:57.883070 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.383040988 +0000 UTC m=+171.803414501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.884263 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:57 crc kubenswrapper[4904]: E0223 10:08:57.885155 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.385138699 +0000 UTC m=+171.805512212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.890129 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57jr8\" (UniqueName: \"kubernetes.io/projected/fb367911-0f91-48b5-badb-1338ea2de5c1-kube-api-access-57jr8\") pod \"machine-api-operator-5694c8668f-bm6q7\" (UID: \"fb367911-0f91-48b5-badb-1338ea2de5c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.891761 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e93b452-62da-4c18-953a-231a397caa58-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bvb56\" (UID: \"2e93b452-62da-4c18-953a-231a397caa58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.895649 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2krx4\" (UniqueName: \"kubernetes.io/projected/de6f2397-7776-4997-a2d3-c6537471079f-kube-api-access-2krx4\") pod \"console-operator-58897d9998-ls9fb\" (UID: \"de6f2397-7776-4997-a2d3-c6537471079f\") " pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.896412 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" event={"ID":"eaf68a5b-b08c-48a8-bfca-214b04069365","Type":"ContainerStarted","Data":"cb853a1a8ddfcae4eab0531de5320bf9f553d52d3e66ece9dac80409c9d72754"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.896461 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" event={"ID":"eaf68a5b-b08c-48a8-bfca-214b04069365","Type":"ContainerStarted","Data":"0e3ef4a50f15c0b47d94e8e75d187ed425d9386e4966888a4446d5b57677c3d8"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.900875 4904 generic.go:334] "Generic (PLEG): container finished" podID="f76de875-328c-4a57-beac-43e3b38ed141" containerID="3db3aecbab5dbb126cdd02cd08887889b32425a81468786763bdcd8a8cad5e31" exitCode=0 Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.901048 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" event={"ID":"f76de875-328c-4a57-beac-43e3b38ed141","Type":"ContainerDied","Data":"3db3aecbab5dbb126cdd02cd08887889b32425a81468786763bdcd8a8cad5e31"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.901093 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" event={"ID":"f76de875-328c-4a57-beac-43e3b38ed141","Type":"ContainerStarted","Data":"faa8f97371e1e92263b8583fb76a5140cbba7cdc9d1f9e670faf34555e47015d"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.905218 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" event={"ID":"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b","Type":"ContainerStarted","Data":"45eb62d15cb6b6dd65ab240f6297946398a17de40b660c60c3ca663f6c37a11c"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.905272 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" event={"ID":"6c03584b-f4a1-4a7b-b7b5-9b19200e8c7b","Type":"ContainerStarted","Data":"44efac52c93f0eebeef2b3153cb4259785f563de7a95d3b4b3d7a779c845f58b"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.908731 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" event={"ID":"fc97320a-b3d3-4a5f-8dcf-34fb942f6669","Type":"ContainerStarted","Data":"9099d6964598f8bab778af533579f85f17088fd69809f163853dc8e558758967"} Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.916860 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2m69\" (UniqueName: \"kubernetes.io/projected/b6031dca-3a8a-48a8-8dc5-b887407ef01e-kube-api-access-v2m69\") pod \"catalog-operator-68c6474976-48sfc\" (UID: \"b6031dca-3a8a-48a8-8dc5-b887407ef01e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.939817 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74sw2\" (UniqueName: \"kubernetes.io/projected/2b06eca1-c860-4af5-b973-ff30c046462b-kube-api-access-74sw2\") pod \"package-server-manager-789f6589d5-vfhxr\" (UID: \"2b06eca1-c860-4af5-b973-ff30c046462b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.951251 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkrfh\" (UniqueName: \"kubernetes.io/projected/881a7096-caa3-4abc-8a27-f9544300a130-kube-api-access-gkrfh\") pod \"csi-hostpathplugin-npj6g\" (UID: \"881a7096-caa3-4abc-8a27-f9544300a130\") " pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.962582 4904 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.975165 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.977595 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8s5\" (UniqueName: \"kubernetes.io/projected/5dc90b39-0709-4365-a42f-c8f8330f0be0-kube-api-access-zt8s5\") pod \"ingress-canary-5m7h4\" (UID: \"5dc90b39-0709-4365-a42f-c8f8330f0be0\") " pod="openshift-ingress-canary/ingress-canary-5m7h4" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.987085 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.990773 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gj9mg"] Feb 23 10:08:57 crc kubenswrapper[4904]: I0223 10:08:57.992839 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ljpz\" (UniqueName: \"kubernetes.io/projected/74d9be42-ba8c-426b-b4b0-bad0bc65648b-kube-api-access-2ljpz\") pod \"marketplace-operator-79b997595-5bfn9\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:57.994137 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:57.996115 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.496093493 +0000 UTC m=+171.916467006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.004382 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.007994 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.010362 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bj55\" (UniqueName: \"kubernetes.io/projected/ccac25dc-84f1-4da8-b7cb-e5e9b4985021-kube-api-access-6bj55\") pod \"machine-config-operator-74547568cd-q92ld\" (UID: \"ccac25dc-84f1-4da8-b7cb-e5e9b4985021\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.015543 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.030335 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sbq9\" (UniqueName: \"kubernetes.io/projected/c8c284af-08a1-403a-9f3e-f7ed477a1bf9-kube-api-access-4sbq9\") pod \"multus-admission-controller-857f4d67dd-cb2c7\" (UID: \"c8c284af-08a1-403a-9f3e-f7ed477a1bf9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.031707 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.050686 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwhmz\" (UniqueName: \"kubernetes.io/projected/d2c1c227-0297-4ba0-9acb-4690cffd0554-kube-api-access-fwhmz\") pod \"collect-profiles-29530680-dwm5z\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.071288 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76xmn\" (UniqueName: \"kubernetes.io/projected/f2354c5a-f2fc-41e9-85ec-0ec2498ede86-kube-api-access-76xmn\") pod \"olm-operator-6b444d44fb-ctv9r\" (UID: \"f2354c5a-f2fc-41e9-85ec-0ec2498ede86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.092987 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cb983a4-cc92-48b0-b722-920f10ca60f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dqz6v\" (UID: \"6cb983a4-cc92-48b0-b722-920f10ca60f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.093872 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.106852 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.107854 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.111330 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.611315951 +0000 UTC m=+172.031689454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.113304 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.124428 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.141694 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kq2w\" (UniqueName: \"kubernetes.io/projected/6b39937b-9604-4c58-b4f5-fea879103d21-kube-api-access-4kq2w\") pod \"service-ca-9c57cc56f-8v5w5\" (UID: \"6b39937b-9604-4c58-b4f5-fea879103d21\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.154407 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.163914 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vq74\" (UniqueName: \"kubernetes.io/projected/588b4893-86b2-4451-930b-48aee4c00761-kube-api-access-2vq74\") pod \"migrator-59844c95c7-d8mh2\" (UID: \"588b4893-86b2-4451-930b-48aee4c00761\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.164680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6h2n\" (UniqueName: \"kubernetes.io/projected/abc78ff8-2055-4dbe-ae4e-67061adfe881-kube-api-access-q6h2n\") pod \"console-f9d7485db-x6bcw\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") " pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.175196 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.175790 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.178783 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8445\" (UniqueName: \"kubernetes.io/projected/14a8ee7a-0483-4a58-a9e4-7b26248e998b-kube-api-access-h8445\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrs2w\" (UID: \"14a8ee7a-0483-4a58-a9e4-7b26248e998b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.183808 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.197309 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t46m5\" (UniqueName: \"kubernetes.io/projected/bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa-kube-api-access-t46m5\") pod \"machine-config-controller-84d6567774-54qmm\" (UID: \"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.197528 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.209290 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.209741 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.709700466 +0000 UTC m=+172.130073979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.212432 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5m7h4" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.215135 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb94f\" (UniqueName: \"kubernetes.io/projected/8ab8a442-5053-40b9-8415-4d6f82295dcc-kube-api-access-tb94f\") pod \"machine-config-server-fmtkh\" (UID: \"8ab8a442-5053-40b9-8415-4d6f82295dcc\") " pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.236369 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-npj6g" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.238183 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7fv\" (UniqueName: \"kubernetes.io/projected/733bfc69-81cd-4b47-b8ee-2380f9a728b6-kube-api-access-2w7fv\") pod \"dns-default-2lhs9\" (UID: \"733bfc69-81cd-4b47-b8ee-2380f9a728b6\") " pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.243192 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2lhs9" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.253607 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nddh\" (UniqueName: \"kubernetes.io/projected/abb69e50-b84d-4499-a703-72e00ef6ff2a-kube-api-access-9nddh\") pod \"router-default-5444994796-kg9d6\" (UID: \"abb69e50-b84d-4499-a703-72e00ef6ff2a\") " pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.276043 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px2hs\" (UniqueName: \"kubernetes.io/projected/02accb03-9a1d-4244-ae51-40a2e1967ab7-kube-api-access-px2hs\") pod \"packageserver-d55dfcdfc-z6f26\" (UID: \"02accb03-9a1d-4244-ae51-40a2e1967ab7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.295981 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fcc\" (UniqueName: \"kubernetes.io/projected/29d58264-b6c1-46e9-b283-95faf671c7de-kube-api-access-62fcc\") pod \"service-ca-operator-777779d784-j9m4p\" (UID: \"29d58264-b6c1-46e9-b283-95faf671c7de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.311529 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvdm\" (UniqueName: \"kubernetes.io/projected/a249d265-3593-41f9-ac65-9a2b60d436cb-kube-api-access-4wvdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-787c5\" (UID: \"a249d265-3593-41f9-ac65-9a2b60d436cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.312345 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.312870 4904 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.81285582 +0000 UTC m=+172.233229333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.378572 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.379795 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x"] Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.384871 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.398751 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.421321 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.422116 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:58.922095403 +0000 UTC m=+172.342468916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.426197 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.432142 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.441949 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.446901 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.491120 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.505375 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fmtkh" Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.523067 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.523378 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.023366032 +0000 UTC m=+172.443739545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.624480 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.625022 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.125000731 +0000 UTC m=+172.545374244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.725832 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.726233 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.226218619 +0000 UTC m=+172.646592132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.828529 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.828740 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.328684074 +0000 UTC m=+172.749057597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.829601 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.829997 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.329985332 +0000 UTC m=+172.750358915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.943874 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:58 crc kubenswrapper[4904]: E0223 10:08:58.944953 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.444933602 +0000 UTC m=+172.865307115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.947280 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fmtkh" event={"ID":"8ab8a442-5053-40b9-8415-4d6f82295dcc","Type":"ContainerStarted","Data":"774f643f8d809df77a802370500d2d1d520d8ffe3bd92cc1d06f55aa55388641"} Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.956731 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" event={"ID":"59e2a4d4-b37b-46f1-9937-045403839c98","Type":"ContainerStarted","Data":"d7fd39c97269b84755dfee54ead24021b5f932123df64dd1cc583bf1cebfddd6"} Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.968451 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kg9d6" event={"ID":"abb69e50-b84d-4499-a703-72e00ef6ff2a","Type":"ContainerStarted","Data":"8e654e64727a7782fa53fc9ec3154008f12e19d80e8178c361cbd4040c04daa6"} Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.971843 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" event={"ID":"a666f5d4-c0d8-4714-b04d-64cf96e30345","Type":"ContainerStarted","Data":"71772ffac0835f3a5c29c7a8387ded9b2e3a531fee2699e1ac2c0c1b5d3436db"} Feb 23 10:08:58 crc kubenswrapper[4904]: I0223 10:08:58.978147 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" event={"ID":"fc97320a-b3d3-4a5f-8dcf-34fb942f6669","Type":"ContainerStarted","Data":"4978dd2f829ab579aa22171f219cdc8871d41e0edd74fc5a5d77cf4ff46b1f69"} Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.039643 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" event={"ID":"080d9e4a-a9f2-4e85-b386-b0f1aefe6c14","Type":"ContainerStarted","Data":"d7379304424cae33363eb2a8747c4439afae97ee2a508929f6e83d30b47bbb79"} Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.045190 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" event={"ID":"a5af91ff-1c77-4274-b318-fef6771be569","Type":"ContainerStarted","Data":"3b053e1caf1dfb59b920aa4dbcfea9d04e3c1a348c23f88d2963b231e875c7ee"} Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.049323 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" event={"ID":"f76de875-328c-4a57-beac-43e3b38ed141","Type":"ContainerStarted","Data":"2fc006995e9c8e470059f3dabb1a42be3d90a8c16ba0c8df9a0c6dcde32a1cef"} Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.050854 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" event={"ID":"1a168c48-91ab-4b62-ab5d-0599a0f7427a","Type":"ContainerStarted","Data":"6c4146e727a9755bb32eadd2617ef63fe5b8d847152672d7abdf4d2b5737025b"} Feb 23 10:08:59 crc 
kubenswrapper[4904]: I0223 10:08:59.056839 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jnvdm" event={"ID":"945901ad-f721-4897-bca6-16436563e92c","Type":"ContainerStarted","Data":"a877210a05329bb4e61ae39df6f0af1fcf4556e20a56e7c0a1a2e599a8d5715b"}
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.056933 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jnvdm"
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.059847 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.059913 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.060285 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.063588 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.563575811 +0000 UTC m=+172.983949324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.069504 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w"
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.162354 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.162812 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.66277967 +0000 UTC m=+173.083153173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
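[editor's note] The patch_prober.go / prober.go pair above is the kubelet's HTTP readiness probe dialing the pod IP directly and being refused because download-server is not listening yet; the same mechanism reports route-controller-manager as "ready" two entries later. What the probe does is essentially this (sketch only; the pod IP 10.217.0.15 comes from the log and is normally reachable only from the node):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Pod IP and port from the log entry; a 2xx/3xx response counts as ready.
		client := &http.Client{Timeout: time.Second}
		resp, err := client.Get("http://10.217.0.15:8080/")
		if err != nil {
			// Matches the failure above: "dial tcp 10.217.0.15:8080: connect: connection refused"
			fmt.Println("readiness probe failed:", err)
			return
		}
		defer resp.Body.Close()
		if resp.StatusCode >= 200 && resp.StatusCode < 400 {
			fmt.Println("readiness probe succeeded:", resp.Status)
		} else {
			fmt.Println("readiness probe failed: HTTP", resp.StatusCode)
		}
	}

[end note]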
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.347112 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.354978 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.854959265 +0000 UTC m=+173.275332778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.387182 4904 csr.go:261] certificate signing request csr-p25hp is approved, waiting to be issued
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.402146 4904 csr.go:257] certificate signing request csr-p25hp is issued
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.449040 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.449201 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.949175347 +0000 UTC m=+173.369548860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
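[editor's note] The csr.go entries are the kubelet's serving-certificate bootstrap: it posts a CertificateSigningRequest (here csr-p25hp), waits for approval, then fetches the issued certificate. A sketch for inspecting that state via client-go (assumptions: kubeconfig at the default path, and that the CSR object still exists, since issued CSRs are garbage-collected after a while):

	package main

	import (
		"context"
		"fmt"

		certv1 "k8s.io/api/certificates/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(context.TODO(), "csr-p25hp", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range csr.Status.Conditions {
			if cond.Type == certv1.CertificateApproved {
				fmt.Println("approved:", cond.Reason) // the state csr.go:261 logs
			}
		}
		// A non-empty Status.Certificate is what csr.go:257 reports as "issued".
		fmt.Println("issued:", len(csr.Status.Certificate) > 0)
	}

[end note]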
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.449267 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.449644 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:08:59.94963559 +0000 UTC m=+173.370009103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.550176 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.550391 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.050368104 +0000 UTC m=+173.470741627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.550430 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.550744 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.050735225 +0000 UTC m=+173.471108738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.655453 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.656153 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.156123595 +0000 UTC m=+173.576497098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.665267 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ls9fb"] Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.678721 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mz6cv"] Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.757024 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.757343 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.257331382 +0000 UTC m=+173.677704895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.791144 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bclzx" podStartSLOduration=104.791129703 podStartE2EDuration="1m44.791129703s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:59.786201859 +0000 UTC m=+173.206575372" watchObservedRunningTime="2026-02-23 10:08:59.791129703 +0000 UTC m=+173.211503216" Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.823540 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zlvdx" podStartSLOduration=103.823527213 podStartE2EDuration="1m43.823527213s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:59.822199634 +0000 UTC m=+173.242573147" watchObservedRunningTime="2026-02-23 10:08:59.823527213 +0000 UTC m=+173.243900726" Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.858325 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.858554 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.358515889 +0000 UTC m=+173.778889402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.858849 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.859241 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.35922923 +0000 UTC m=+173.779602743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.919260 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" podStartSLOduration=103.91924569 podStartE2EDuration="1m43.91924569s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:59.87047198 +0000 UTC m=+173.290845493" watchObservedRunningTime="2026-02-23 10:08:59.91924569 +0000 UTC m=+173.339619203"
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.920443 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-22k6t" podStartSLOduration=104.920436315 podStartE2EDuration="1m44.920436315s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:59.917619392 +0000 UTC m=+173.337992925" watchObservedRunningTime="2026-02-23 10:08:59.920436315 +0000 UTC m=+173.340809828"
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.960945 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:08:59 crc kubenswrapper[4904]: E0223 10:08:59.961333 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.461316813 +0000 UTC m=+173.881690326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
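[editor's note] pod_startup_latency_tracker.go reports two numbers per pod: podStartE2EDuration (watchObservedRunningTime minus podCreationTimestamp) and podStartSLOduration, which additionally excludes image-pull time; here the two agree because firstStartedPulling/lastFinishedPulling are the Go zero time, i.e. no pull was needed. The arithmetic, checked against the route-controller-manager values above:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamp layout matching Go's default time.Time formatting in the log.
		const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
		created, err := time.Parse(layout, "2026-02-23 10:07:16 +0000 UTC")
		if err != nil {
			panic(err)
		}
		running, err := time.Parse(layout, "2026-02-23 10:08:59.91924569 +0000 UTC")
		if err != nil {
			panic(err)
		}
		// Prints 1m43.91924569s, the logged podStartE2EDuration; with zero pull
		// time the SLO duration is the same value expressed in seconds.
		fmt.Println("podStartE2EDuration:", running.Sub(created))
	}

[end note]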
Feb 23 10:08:59 crc kubenswrapper[4904]: I0223 10:08:59.994278 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qr2nz" podStartSLOduration=103.994264119 podStartE2EDuration="1m43.994264119s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:08:59.95301463 +0000 UTC m=+173.373388143" watchObservedRunningTime="2026-02-23 10:08:59.994264119 +0000 UTC m=+173.414637632"
Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.030662 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vcxn6" podStartSLOduration=104.030646846 podStartE2EDuration="1m44.030646846s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.027984258 +0000 UTC m=+173.448357771" watchObservedRunningTime="2026-02-23 10:09:00.030646846 +0000 UTC m=+173.451020359"
Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.062345 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.062640 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.562624124 +0000 UTC m=+173.982997637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.068302 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ls9fb" event={"ID":"de6f2397-7776-4997-a2d3-c6537471079f","Type":"ContainerStarted","Data":"881a95af015cf5ff13c37d083c3c25365a4bfab3274e744e56adc3825b4d5891"} Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.099793 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" event={"ID":"f76de875-328c-4a57-beac-43e3b38ed141","Type":"ContainerStarted","Data":"f43574887f245eecb96ee6264d6095ef8ab358fbe4afe568eec94d1becc65791"} Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.108098 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbdkj" podStartSLOduration=105.108078756 podStartE2EDuration="1m45.108078756s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.064927651 +0000 UTC m=+173.485301164" watchObservedRunningTime="2026-02-23 10:09:00.108078756 +0000 UTC m=+173.528452269" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.109294 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jnvdm" podStartSLOduration=104.109278921 podStartE2EDuration="1m44.109278921s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.107670784 +0000 UTC m=+173.528044297" watchObservedRunningTime="2026-02-23 10:09:00.109278921 +0000 UTC m=+173.529652434" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.148021 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fmtkh" event={"ID":"8ab8a442-5053-40b9-8415-4d6f82295dcc","Type":"ContainerStarted","Data":"35cd51e1e1d29397c325fa548a4d80d88fa08d4458573fb4b86017c60eb5dccd"} Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.165574 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hkrln" podStartSLOduration=105.165548831 podStartE2EDuration="1m45.165548831s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.158030481 +0000 UTC m=+173.578403994" watchObservedRunningTime="2026-02-23 10:09:00.165548831 +0000 UTC m=+173.585922354" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.170692 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.171838 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.671819165 +0000 UTC m=+174.092192678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.172311 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kg9d6" event={"ID":"abb69e50-b84d-4499-a703-72e00ef6ff2a","Type":"ContainerStarted","Data":"a8a6bdb8f15260bcb4399c4afe51e766bd4dd9bcc76d687e3faf54552a69fffe"} Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.199375 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" event={"ID":"a666f5d4-c0d8-4714-b04d-64cf96e30345","Type":"ContainerStarted","Data":"573427657f02d1907361faaf7202919c79a3f39df1ce7d8046b3f9b60388024d"} Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.199424 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" event={"ID":"a666f5d4-c0d8-4714-b04d-64cf96e30345","Type":"ContainerStarted","Data":"9ab1151415deaaddee2250da6e9e0d43fe028132835ae0a87e954f925409a55c"} Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.224130 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fmtkh" podStartSLOduration=5.224103908 podStartE2EDuration="5.224103908s" podCreationTimestamp="2026-02-23 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.220470752 +0000 UTC m=+173.640844295" watchObservedRunningTime="2026-02-23 10:09:00.224103908 +0000 UTC m=+173.644477421" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.230840 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" event={"ID":"a5af91ff-1c77-4274-b318-fef6771be569","Type":"ContainerStarted","Data":"cbf09107e880c06a813843d3c01a67183eacfc63936ebe2c0e453f5825b6d1c5"} Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.230887 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" event={"ID":"a5af91ff-1c77-4274-b318-fef6771be569","Type":"ContainerStarted","Data":"6962ce2a75c92dc8f4cafccf3eac4dd8f72d0f2f583dc558faffc521fa9a5c8b"} Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.231697 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:00 crc 
kubenswrapper[4904]: I0223 10:09:00.231780 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.258544 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" podStartSLOduration=105.258522257 podStartE2EDuration="1m45.258522257s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.255097657 +0000 UTC m=+173.675471170" watchObservedRunningTime="2026-02-23 10:09:00.258522257 +0000 UTC m=+173.678895760" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.272831 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.274813 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.774799215 +0000 UTC m=+174.195172728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.302283 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gj9mg" podStartSLOduration=104.30226487 podStartE2EDuration="1m44.30226487s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.289782534 +0000 UTC m=+173.710156047" watchObservedRunningTime="2026-02-23 10:09:00.30226487 +0000 UTC m=+173.722638383" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.337154 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kg9d6" podStartSLOduration=104.337134562 podStartE2EDuration="1m44.337134562s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.334148935 +0000 UTC m=+173.754522448" watchObservedRunningTime="2026-02-23 10:09:00.337134562 +0000 UTC m=+173.757508075" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.359321 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-mz6cv"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.378128 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pw46x" podStartSLOduration=104.378112094 podStartE2EDuration="1m44.378112094s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:00.37455902 +0000 UTC m=+173.794932533" watchObservedRunningTime="2026-02-23 10:09:00.378112094 +0000 UTC m=+173.798485607" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.380905 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.381563 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.382585 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.882567574 +0000 UTC m=+174.302941087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.394612 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.402869 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 10:09:00 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Feb 23 10:09:00 crc kubenswrapper[4904]: [+]process-running ok Feb 23 10:09:00 crc kubenswrapper[4904]: healthz check failed Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.402920 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.407836 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-23 10:03:59 +0000 UTC, rotation deadline is 2026-12-12 15:04:21.223889713 +0000 UTC Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.407873 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7012h55m20.816019717s for next 
certificate rotation Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.487134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.487526 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:00.987513232 +0000 UTC m=+174.407886745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.490741 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.490885 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m84cg"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.518810 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xnfq7"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.524179 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p5d7h"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.551064 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8v5w5"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.590664 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.601062 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.101042 +0000 UTC m=+174.521415513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.650421 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2lhs9"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.658802 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.669949 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5m7h4"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.673748 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bfn9"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.674760 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.679569 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb2c7"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.682244 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bm6q7"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.683753 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.701637 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.701999 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.20198629 +0000 UTC m=+174.622359803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.704047 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r"] Feb 23 10:09:00 crc kubenswrapper[4904]: W0223 10:09:00.713809 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74d9be42_ba8c_426b_b4b0_bad0bc65648b.slice/crio-478048890c09a8ee1aa9b4c5160738db11279d2032d3b23666e95a03d028518f WatchSource:0}: Error finding container 478048890c09a8ee1aa9b4c5160738db11279d2032d3b23666e95a03d028518f: Status 404 returned error can't find the container with id 478048890c09a8ee1aa9b4c5160738db11279d2032d3b23666e95a03d028518f Feb 23 10:09:00 crc kubenswrapper[4904]: W0223 10:09:00.759405 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8c284af_08a1_403a_9f3e_f7ed477a1bf9.slice/crio-7461a459787d0dae6f915803512ceaea93ebb61347693a4a43f1724133975f89 WatchSource:0}: Error finding container 7461a459787d0dae6f915803512ceaea93ebb61347693a4a43f1724133975f89: Status 404 returned error can't find the container with id 7461a459787d0dae6f915803512ceaea93ebb61347693a4a43f1724133975f89 Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.765285 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.782859 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x6bcw"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.804428 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.806930 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.306892416 +0000 UTC m=+174.727265929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.807145 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.807493 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.307481803 +0000 UTC m=+174.727855316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.810043 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.819522 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.822183 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.824593 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.831562 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-npj6g"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.833011 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.840683 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w"] Feb 23 10:09:00 crc kubenswrapper[4904]: W0223 10:09:00.899184 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b06eca1_c860_4af5_b973_ff30c046462b.slice/crio-cf44804eca0b3609e2a0f5284359b11f102d6a68981dd23f20a61a5e8ea2189f WatchSource:0}: Error finding container cf44804eca0b3609e2a0f5284359b11f102d6a68981dd23f20a61a5e8ea2189f: Status 404 returned error can't find the container with id 
cf44804eca0b3609e2a0f5284359b11f102d6a68981dd23f20a61a5e8ea2189f Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.907742 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld"] Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.908110 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:00 crc kubenswrapper[4904]: E0223 10:09:00.909125 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.409099473 +0000 UTC m=+174.829472986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:00 crc kubenswrapper[4904]: W0223 10:09:00.928136 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccac25dc_84f1_4da8_b7cb_e5e9b4985021.slice/crio-4bd20b949304d25646e9a7d447f5ec6daa53f2fe52a1bb332eb231c88645a9db WatchSource:0}: Error finding container 4bd20b949304d25646e9a7d447f5ec6daa53f2fe52a1bb332eb231c88645a9db: Status 404 returned error can't find the container with id 4bd20b949304d25646e9a7d447f5ec6daa53f2fe52a1bb332eb231c88645a9db Feb 23 10:09:00 crc kubenswrapper[4904]: I0223 10:09:00.964884 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5"] Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.010037 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.010927 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.510916528 +0000 UTC m=+174.931290041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: W0223 10:09:01.078824 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda249d265_3593_41f9_ac65_9a2b60d436cb.slice/crio-a5b827c13f9fef980556e029eebf0cffe6bf5fc3f212e90fc01ba8a3e4ebe1a3 WatchSource:0}: Error finding container a5b827c13f9fef980556e029eebf0cffe6bf5fc3f212e90fc01ba8a3e4ebe1a3: Status 404 returned error can't find the container with id a5b827c13f9fef980556e029eebf0cffe6bf5fc3f212e90fc01ba8a3e4ebe1a3 Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.113187 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.113507 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.613480985 +0000 UTC m=+175.033854498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.214449 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.215242 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.715229549 +0000 UTC m=+175.135603062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.315369 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.315535 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.815510209 +0000 UTC m=+175.235883722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.315669 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.315941 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.815929751 +0000 UTC m=+175.236303264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.329450 4904 patch_prober.go:28] interesting pod/console-operator-58897d9998-ls9fb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.349946 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ls9fb" podUID="de6f2397-7776-4997-a2d3-c6537471079f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.349826 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ls9fb" podStartSLOduration=105.349808885 podStartE2EDuration="1m45.349808885s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:01.34898753 +0000 UTC m=+174.769361043" watchObservedRunningTime="2026-02-23 10:09:01.349808885 +0000 UTC m=+174.770182398" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.361921 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.361951 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" event={"ID":"14a8ee7a-0483-4a58-a9e4-7b26248e998b","Type":"ContainerStarted","Data":"7c5e52ce378b22e59f90f0da1a862f1cbccd4c7bb497c241cbcc43c016520556"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.361973 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" event={"ID":"02accb03-9a1d-4244-ae51-40a2e1967ab7","Type":"ContainerStarted","Data":"e3e85e582e4d9389c477bf223f0ec6414fe09d56d631c6a0a0da2903a1217ee1"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.361986 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" event={"ID":"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa","Type":"ContainerStarted","Data":"26da87b55b9d1aaa4e1e57045874dc9b5f5a9f7da9a02efd1758a28c8350ffb7"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.361999 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5m7h4" event={"ID":"5dc90b39-0709-4365-a42f-c8f8330f0be0","Type":"ContainerStarted","Data":"18f8b3ee4c8c80b814e62b4c8f63ed109c6aa31853981232687587c22da12057"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.362011 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ls9fb" 
event={"ID":"de6f2397-7776-4997-a2d3-c6537471079f","Type":"ContainerStarted","Data":"842a9c90cb4bc05629b9b37fda7f01fce13ed69bdb8187833ff3ed3ae79fbcd1"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.403003 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 10:09:01 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Feb 23 10:09:01 crc kubenswrapper[4904]: [+]process-running ok Feb 23 10:09:01 crc kubenswrapper[4904]: healthz check failed Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.403045 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.416977 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.418341 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:01.918321693 +0000 UTC m=+175.338695206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.432406 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" event={"ID":"b6031dca-3a8a-48a8-8dc5-b887407ef01e","Type":"ContainerStarted","Data":"fa78fc7e5beac26335a4ddf02c64e07f7e2f01ae2fa37bbd443ba0efdfa5e7ea"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.435888 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" event={"ID":"a249d265-3593-41f9-ac65-9a2b60d436cb","Type":"ContainerStarted","Data":"a5b827c13f9fef980556e029eebf0cffe6bf5fc3f212e90fc01ba8a3e4ebe1a3"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.443655 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" event={"ID":"6b39937b-9604-4c58-b4f5-fea879103d21","Type":"ContainerStarted","Data":"b670f74794630200705b899ea4f19363282607cc91002e45b235f02f604f6fd6"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.443719 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" 
event={"ID":"6b39937b-9604-4c58-b4f5-fea879103d21","Type":"ContainerStarted","Data":"7e23600646712fde433010ded65086dc19df75c38fed268a4e6bf5cd101d8109"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.473650 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" event={"ID":"f2354c5a-f2fc-41e9-85ec-0ec2498ede86","Type":"ContainerStarted","Data":"d877f6ae162594b2d39fa3846d3766ec002a96f62d91dae6b205b7fb5d82c3b2"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.477069 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.484068 4904 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ctv9r container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.484153 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" podUID="f2354c5a-f2fc-41e9-85ec-0ec2498ede86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.484943 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8v5w5" podStartSLOduration=105.484924096 podStartE2EDuration="1m45.484924096s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:01.476740766 +0000 UTC m=+174.897114279" watchObservedRunningTime="2026-02-23 10:09:01.484924096 +0000 UTC m=+174.905297609" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.485180 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lhs9" event={"ID":"733bfc69-81cd-4b47-b8ee-2380f9a728b6","Type":"ContainerStarted","Data":"7dd2f65d53cf82f482fcea6c9a1eb522115f831536a1454cf98679d32fa7ae06"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.508313 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" podStartSLOduration=105.508291721 podStartE2EDuration="1m45.508291721s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:01.506725725 +0000 UTC m=+174.927099238" watchObservedRunningTime="2026-02-23 10:09:01.508291721 +0000 UTC m=+174.928665224" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.516594 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" event={"ID":"ccac25dc-84f1-4da8-b7cb-e5e9b4985021","Type":"ContainerStarted","Data":"4bd20b949304d25646e9a7d447f5ec6daa53f2fe52a1bb332eb231c88645a9db"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.518641 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.519093 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.019078217 +0000 UTC m=+175.439451730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.519137 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" event={"ID":"03eb9af6-09e2-4d8b-ac64-c924049bebee","Type":"ContainerStarted","Data":"5e4b88dea6b49cfb986a3b70c154a76f3a70572aac3f8a4a2bcf71f35a9bf68c"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.541583 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" event={"ID":"588b4893-86b2-4451-930b-48aee4c00761","Type":"ContainerStarted","Data":"076d59c1628fee84813bd9f89cebc9dc92ae898ce3a887b033b8be2d36426fd5"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.557322 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" event={"ID":"bcf306b5-f8ab-4774-8163-c2a2b47f1940","Type":"ContainerStarted","Data":"08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.557369 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" event={"ID":"bcf306b5-f8ab-4774-8163-c2a2b47f1940","Type":"ContainerStarted","Data":"7f077a167b3efa705f2358b3e45a6a7673dd3336905acd297d86d6f529daf141"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.557434 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" podUID="bcf306b5-f8ab-4774-8163-c2a2b47f1940" containerName="controller-manager" containerID="cri-o://08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399" gracePeriod=30 Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.557662 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.559284 4904 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mz6cv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.559322 4904 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" podUID="bcf306b5-f8ab-4774-8163-c2a2b47f1940" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.565066 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" event={"ID":"fb5022ad-ae93-40a3-a025-c192c3d4ead4","Type":"ContainerStarted","Data":"1514e96d1bc83088c21a28d7398f6be6c0c73e2747f1b78b98d97dcf63b03107"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.565116 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" event={"ID":"fb5022ad-ae93-40a3-a025-c192c3d4ead4","Type":"ContainerStarted","Data":"7cfe5ace7698f9d37a072e793d6e8df4a44f8675f7cfb909af1cb48e79a1ee2a"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.586596 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" event={"ID":"6cb983a4-cc92-48b0-b722-920f10ca60f1","Type":"ContainerStarted","Data":"cc889514b92d0e8f7077dd21ecf443fd55a375bba15376750f6275bb4c45d1e4"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.592887 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" podStartSLOduration=105.592870781 podStartE2EDuration="1m45.592870781s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:01.590121731 +0000 UTC m=+175.010495244" watchObservedRunningTime="2026-02-23 10:09:01.592870781 +0000 UTC m=+175.013244294" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.606946 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" event={"ID":"fb367911-0f91-48b5-badb-1338ea2de5c1","Type":"ContainerStarted","Data":"27e15320e5a0cfa5cf81afc4d0fec6e85f6dbf134a783ae0261e92cb7d572ed0"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.607005 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" event={"ID":"fb367911-0f91-48b5-badb-1338ea2de5c1","Type":"ContainerStarted","Data":"1bcb1fa0ce56c185641c75f92c7ab75db48b3dfdfde8312727a39b0678c62009"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.608051 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xnfq7" podStartSLOduration=105.608036006 podStartE2EDuration="1m45.608036006s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:01.607247663 +0000 UTC m=+175.027621176" watchObservedRunningTime="2026-02-23 10:09:01.608036006 +0000 UTC m=+175.028409519" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.608223 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-npj6g" event={"ID":"881a7096-caa3-4abc-8a27-f9544300a130","Type":"ContainerStarted","Data":"1c3d1270f35f00b3fbb0b97a5aa317839fee3c8efcdea794d6881fbef3b2716b"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.609244 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" event={"ID":"c8c284af-08a1-403a-9f3e-f7ed477a1bf9","Type":"ContainerStarted","Data":"7461a459787d0dae6f915803512ceaea93ebb61347693a4a43f1724133975f89"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.609959 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" event={"ID":"2b06eca1-c860-4af5-b973-ff30c046462b","Type":"ContainerStarted","Data":"cf44804eca0b3609e2a0f5284359b11f102d6a68981dd23f20a61a5e8ea2189f"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.614881 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" event={"ID":"2e93b452-62da-4c18-953a-231a397caa58","Type":"ContainerStarted","Data":"3a5eff3a801c66acd08a1c8a94baed03002245b492b48481a76b46fec05230a8"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.614913 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" event={"ID":"2e93b452-62da-4c18-953a-231a397caa58","Type":"ContainerStarted","Data":"dc913030025256878d2bc052d1128b5fc1ed259c659be2ffc804b2cdb8307ead"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.619815 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.621078 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.121059758 +0000 UTC m=+175.541433271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.621135 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.621487 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.627477 4904 generic.go:334] "Generic (PLEG): container finished" podID="a5f464dd-932e-40b1-98d2-81437ad39aab" containerID="c7a64b6bbd0a98a299c459f707634dd3badd3d22c8b13090ad84df4164993678" exitCode=0 Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.627544 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" event={"ID":"a5f464dd-932e-40b1-98d2-81437ad39aab","Type":"ContainerDied","Data":"c7a64b6bbd0a98a299c459f707634dd3badd3d22c8b13090ad84df4164993678"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.627571 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" event={"ID":"a5f464dd-932e-40b1-98d2-81437ad39aab","Type":"ContainerStarted","Data":"c503380a7b1c366bddcfda7095cc6130863bf5e22cce84c7ab41fb823c1153aa"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.633859 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" event={"ID":"29d58264-b6c1-46e9-b283-95faf671c7de","Type":"ContainerStarted","Data":"9d77b9ddb1844b65060a9a26e84eb4b8f7b2d5005191215725f69cc63821c504"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.635538 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" event={"ID":"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66","Type":"ContainerStarted","Data":"16a98c868c5b4e6d828941e09457aef062c9081fd84cd85a06b3c29fe92320d2"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.635566 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" event={"ID":"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66","Type":"ContainerStarted","Data":"d8ac97d6c1c9b484d5c46b6862660421e4f4844c681f856f67067be85f18d512"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.636586 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.639195 4904 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-p5d7h container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.639228 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" podUID="eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.640824 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x6bcw" event={"ID":"abc78ff8-2055-4dbe-ae4e-67061adfe881","Type":"ContainerStarted","Data":"a5f2de12b07db53a1de4d96558e2e8997575c52b3fd098aec47860a299d43173"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.652283 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bvb56" podStartSLOduration=105.652268663 podStartE2EDuration="1m45.652268663s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:01.650389438 +0000 UTC m=+175.070762951" watchObservedRunningTime="2026-02-23 10:09:01.652268663 +0000 UTC m=+175.072642176" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.660565 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" event={"ID":"74d9be42-ba8c-426b-b4b0-bad0bc65648b","Type":"ContainerStarted","Data":"bb2992648e779ed50920e7653b5883f606c9e678933be0e56e072a7475f31f83"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.660609 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" event={"ID":"74d9be42-ba8c-426b-b4b0-bad0bc65648b","Type":"ContainerStarted","Data":"478048890c09a8ee1aa9b4c5160738db11279d2032d3b23666e95a03d028518f"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.660766 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.662009 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" event={"ID":"d2c1c227-0297-4ba0-9acb-4690cffd0554","Type":"ContainerStarted","Data":"ce882cf8ff2af1e7ca0a773117291cc129d0e6e46d6e94419291afe0edfd9cc6"} Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.665679 4904 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5bfn9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.665751 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" podUID="74d9be42-ba8c-426b-b4b0-bad0bc65648b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.681738 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" podStartSLOduration=106.681700386 podStartE2EDuration="1m46.681700386s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:01.67912586 +0000 UTC m=+175.099499393" 
watchObservedRunningTime="2026-02-23 10:09:01.681700386 +0000 UTC m=+175.102073899" Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.721278 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.722571 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.222560134 +0000 UTC m=+175.642933637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.825478 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.325455951 +0000 UTC m=+175.745829464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.823563 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.829738 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.830746 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.330734035 +0000 UTC m=+175.751107548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:01 crc kubenswrapper[4904]: I0223 10:09:01.938148 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:01 crc kubenswrapper[4904]: E0223 10:09:01.938549 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.438530716 +0000 UTC m=+175.858904329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.040649 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.041013 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.540997181 +0000 UTC m=+175.961370694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.141264 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.142122 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.642104624 +0000 UTC m=+176.062478147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.244568 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.245103 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.745087964 +0000 UTC m=+176.165461487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.345808 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.845779976 +0000 UTC m=+176.266153489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.345698 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.346021 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.346407 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.846397644 +0000 UTC m=+176.266771157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.384523 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 10:09:02 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Feb 23 10:09:02 crc kubenswrapper[4904]: [+]process-running ok Feb 23 10:09:02 crc kubenswrapper[4904]: healthz check failed Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.384587 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.419925 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-mz6cv_bcf306b5-f8ab-4774-8163-c2a2b47f1940/controller-manager/0.log" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.420008 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.447878 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.448550 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:02.948531259 +0000 UTC m=+176.368904772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.452053 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" podStartSLOduration=106.452017151 podStartE2EDuration="1m46.452017151s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:01.732741832 +0000 UTC m=+175.153115345" watchObservedRunningTime="2026-02-23 10:09:02.452017151 +0000 UTC m=+175.872390664" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.463896 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d97bd54d7-9pds6"] Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.464259 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf306b5-f8ab-4774-8163-c2a2b47f1940" containerName="controller-manager" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.464277 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf306b5-f8ab-4774-8163-c2a2b47f1940" containerName="controller-manager" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.464418 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf306b5-f8ab-4774-8163-c2a2b47f1940" containerName="controller-manager" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.465216 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.477635 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d97bd54d7-9pds6"] Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.549648 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf306b5-f8ab-4774-8163-c2a2b47f1940-serving-cert\") pod \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.549822 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-config\") pod \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.549857 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-proxy-ca-bundles\") pod \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.549928 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-client-ca\") pod \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.550059 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx75x\" (UniqueName: \"kubernetes.io/projected/bcf306b5-f8ab-4774-8163-c2a2b47f1940-kube-api-access-wx75x\") pod \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\" (UID: \"bcf306b5-f8ab-4774-8163-c2a2b47f1940\") " Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.550210 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.551881 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bcf306b5-f8ab-4774-8163-c2a2b47f1940" (UID: "bcf306b5-f8ab-4774-8163-c2a2b47f1940"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.553979 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-client-ca" (OuterVolumeSpecName: "client-ca") pod "bcf306b5-f8ab-4774-8163-c2a2b47f1940" (UID: "bcf306b5-f8ab-4774-8163-c2a2b47f1940"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.554264 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.054250888 +0000 UTC m=+176.474624391 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.564401 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-config" (OuterVolumeSpecName: "config") pod "bcf306b5-f8ab-4774-8163-c2a2b47f1940" (UID: "bcf306b5-f8ab-4774-8163-c2a2b47f1940"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.576133 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf306b5-f8ab-4774-8163-c2a2b47f1940-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bcf306b5-f8ab-4774-8163-c2a2b47f1940" (UID: "bcf306b5-f8ab-4774-8163-c2a2b47f1940"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.580453 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf306b5-f8ab-4774-8163-c2a2b47f1940-kube-api-access-wx75x" (OuterVolumeSpecName: "kube-api-access-wx75x") pod "bcf306b5-f8ab-4774-8163-c2a2b47f1940" (UID: "bcf306b5-f8ab-4774-8163-c2a2b47f1940"). InnerVolumeSpecName "kube-api-access-wx75x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.651244 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.651696 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.151668385 +0000 UTC m=+176.572041898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.651753 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.651798 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1956dc12-a0b1-4439-b13a-3ffc15700f02-serving-cert\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.651835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-config\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.651852 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28fdr\" (UniqueName: \"kubernetes.io/projected/1956dc12-a0b1-4439-b13a-3ffc15700f02-kube-api-access-28fdr\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.651886 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-proxy-ca-bundles\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.652025 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-client-ca\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.652071 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.652081 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.652090 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf306b5-f8ab-4774-8163-c2a2b47f1940-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.652098 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx75x\" (UniqueName: \"kubernetes.io/projected/bcf306b5-f8ab-4774-8163-c2a2b47f1940-kube-api-access-wx75x\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.652106 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf306b5-f8ab-4774-8163-c2a2b47f1940-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.652177 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.152168539 +0000 UTC m=+176.572542052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.680893 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" event={"ID":"29d58264-b6c1-46e9-b283-95faf671c7de","Type":"ContainerStarted","Data":"ce74145b162ea8b1828513480707468aa195f82ade25895a54ae54664993c2ef"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.691750 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" event={"ID":"f2354c5a-f2fc-41e9-85ec-0ec2498ede86","Type":"ContainerStarted","Data":"cdc06ac357a4e22f74004cedde3e9f69b6e97acbb6492af26b7757f16214f0a5"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.692795 4904 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ctv9r container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.692831 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" podUID="f2354c5a-f2fc-41e9-85ec-0ec2498ede86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.704450 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lhs9" event={"ID":"733bfc69-81cd-4b47-b8ee-2380f9a728b6","Type":"ContainerStarted","Data":"c3edeba0f91999f286037ce06b34b7840a68fde4a5ff63c391eb6bf9001ec7aa"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.704496 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lhs9" event={"ID":"733bfc69-81cd-4b47-b8ee-2380f9a728b6","Type":"ContainerStarted","Data":"5b94d3a1bf740303fda0a3c0e1715d3cae1d9b5f650af1846f11bb52f4ae6666"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.705244 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2lhs9" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.708180 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-j9m4p" podStartSLOduration=106.708153381 podStartE2EDuration="1m46.708153381s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.707475601 +0000 UTC m=+176.127849114" watchObservedRunningTime="2026-02-23 10:09:02.708153381 +0000 UTC m=+176.128526894" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.726102 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" event={"ID":"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa","Type":"ContainerStarted","Data":"f2cb915dc10743c85aad65b476740e2d78a0e9baef283431dbe797058c8d43a8"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.727914 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" event={"ID":"a249d265-3593-41f9-ac65-9a2b60d436cb","Type":"ContainerStarted","Data":"d08330fd7d823aa71f3a91c05c4bce3afe90c6d2b48ccb2f37d75b77fdcfe17f"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.745079 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" event={"ID":"ccac25dc-84f1-4da8-b7cb-e5e9b4985021","Type":"ContainerStarted","Data":"24a0a959ce02011a08aefc643f1a51b1ae1f1807edebd488b0c8a107bb0e2b6f"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.745124 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" event={"ID":"ccac25dc-84f1-4da8-b7cb-e5e9b4985021","Type":"ContainerStarted","Data":"76778ce7e8a335e73cfd67924d197c70d4f36d5faa739631604223dc8e6ac536"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.747658 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2lhs9" podStartSLOduration=7.747634368 podStartE2EDuration="7.747634368s" podCreationTimestamp="2026-02-23 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.739083547 +0000 UTC m=+176.159457060" watchObservedRunningTime="2026-02-23 10:09:02.747634368 +0000 UTC m=+176.168007881" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.753415 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.753552 4904 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.253531031 +0000 UTC m=+176.673904534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.753634 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-config\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.753659 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28fdr\" (UniqueName: \"kubernetes.io/projected/1956dc12-a0b1-4439-b13a-3ffc15700f02-kube-api-access-28fdr\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.753731 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-proxy-ca-bundles\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.753789 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-client-ca\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.753839 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.753869 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1956dc12-a0b1-4439-b13a-3ffc15700f02-serving-cert\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.754760 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 10:09:03.254745047 +0000 UTC m=+176.675118560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.755208 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-proxy-ca-bundles\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.755211 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-config\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.755302 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-client-ca\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.756187 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x6bcw" event={"ID":"abc78ff8-2055-4dbe-ae4e-67061adfe881","Type":"ContainerStarted","Data":"1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.762249 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1956dc12-a0b1-4439-b13a-3ffc15700f02-serving-cert\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.774298 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-787c5" podStartSLOduration=106.77428206 podStartE2EDuration="1m46.77428206s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.773096865 +0000 UTC m=+176.193470378" watchObservedRunningTime="2026-02-23 10:09:02.77428206 +0000 UTC m=+176.194655573" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.792368 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" event={"ID":"b6031dca-3a8a-48a8-8dc5-b887407ef01e","Type":"ContainerStarted","Data":"c758c9f585220af7973bcf5ea7300824d7743343b3eb4f482eb00477c26a774e"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 
10:09:02.793319 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28fdr\" (UniqueName: \"kubernetes.io/projected/1956dc12-a0b1-4439-b13a-3ffc15700f02-kube-api-access-28fdr\") pod \"controller-manager-7d97bd54d7-9pds6\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") " pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.804984 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.814972 4904 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-48sfc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.815027 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" podUID="b6031dca-3a8a-48a8-8dc5-b887407ef01e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.828946 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" event={"ID":"a5f464dd-932e-40b1-98d2-81437ad39aab","Type":"ContainerStarted","Data":"681e440e0430abd7a56d7806a37634a0e40ad3e4683bdd1d9a6dcef137fbce99"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.829927 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.840583 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x6bcw" podStartSLOduration=106.840565263 podStartE2EDuration="1m46.840565263s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.839018538 +0000 UTC m=+176.259392051" watchObservedRunningTime="2026-02-23 10:09:02.840565263 +0000 UTC m=+176.260938776" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.841175 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q92ld" podStartSLOduration=106.841167471 podStartE2EDuration="1m46.841167471s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.804824945 +0000 UTC m=+176.225198458" watchObservedRunningTime="2026-02-23 10:09:02.841167471 +0000 UTC m=+176.261540984" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.841991 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" event={"ID":"02accb03-9a1d-4244-ae51-40a2e1967ab7","Type":"ContainerStarted","Data":"8f11f49398d8677a9eec5bbdbd124d196d78876fac9f3841c99671af7caa9066"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.843010 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.846566 4904 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-z6f26 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.846609 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" podUID="02accb03-9a1d-4244-ae51-40a2e1967ab7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.857636 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.857833 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.357808539 +0000 UTC m=+176.778182052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.858406 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.860960 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.360945931 +0000 UTC m=+176.781319444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.872649 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" event={"ID":"c8c284af-08a1-403a-9f3e-f7ed477a1bf9","Type":"ContainerStarted","Data":"9377259867f3382cfd2c4a806c8d1a23720f6c42a5a3dee7fc2e8acdc77d4fdd"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.873551 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg" podStartSLOduration=107.87354084 podStartE2EDuration="1m47.87354084s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.873023025 +0000 UTC m=+176.293396538" watchObservedRunningTime="2026-02-23 10:09:02.87354084 +0000 UTC m=+176.293914353" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.907615 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" event={"ID":"2b06eca1-c860-4af5-b973-ff30c046462b","Type":"ContainerStarted","Data":"1f4730f0445eb4410c8feb5af06a2d85fe99d3fa9b457a04f3c4012e3d7d6f2f"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.907673 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" event={"ID":"2b06eca1-c860-4af5-b973-ff30c046462b","Type":"ContainerStarted","Data":"6f6b8e74549690ba0dcf40157d6516ea22f1005dbe2bb6b05adad03bccf60ec8"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.907926 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.914706 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" podStartSLOduration=106.914694207 podStartE2EDuration="1m46.914694207s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.913853762 +0000 UTC m=+176.334227275" watchObservedRunningTime="2026-02-23 10:09:02.914694207 +0000 UTC m=+176.335067720" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.916022 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5m7h4" event={"ID":"5dc90b39-0709-4365-a42f-c8f8330f0be0","Type":"ContainerStarted","Data":"d2bccd8a0ed4a94ccb07eadfb478f25e358bf84bce58b1453150a5300c44268a"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.923081 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" 
event={"ID":"588b4893-86b2-4451-930b-48aee4c00761","Type":"ContainerStarted","Data":"f63e58acd814d3fe72509df7f5805bf62805db31972dc3d4be9e266b42c20a9f"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.923127 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" event={"ID":"588b4893-86b2-4451-930b-48aee4c00761","Type":"ContainerStarted","Data":"b0b5ac292b2079d48494d089718808ed0f6a1795a5ff3056f13598d5b778ad1f"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.926226 4904 generic.go:334] "Generic (PLEG): container finished" podID="03eb9af6-09e2-4d8b-ac64-c924049bebee" containerID="d907208d7007af4b79e9b01ccc5729c741a5666c04f6970d4a19bfd260dabe90" exitCode=0 Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.926433 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" event={"ID":"03eb9af6-09e2-4d8b-ac64-c924049bebee","Type":"ContainerStarted","Data":"76a9d7532f58e36c8e27df8741b0a57a25277a216fe8b97219fd82372d984640"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.926511 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" event={"ID":"03eb9af6-09e2-4d8b-ac64-c924049bebee","Type":"ContainerDied","Data":"d907208d7007af4b79e9b01ccc5729c741a5666c04f6970d4a19bfd260dabe90"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.936574 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" event={"ID":"fb367911-0f91-48b5-badb-1338ea2de5c1","Type":"ContainerStarted","Data":"01e7c25658e2ce0f7af95bf380ae1c52c185e1fe3cada52f277ed0900cc25ce6"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.944855 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" event={"ID":"14a8ee7a-0483-4a58-a9e4-7b26248e998b","Type":"ContainerStarted","Data":"e24403aaf6da11b2b2d08efaf36a3d32d4e5774b090024a538c1429e01617375"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.944974 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" podStartSLOduration=106.944957384 podStartE2EDuration="1m46.944957384s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.944701877 +0000 UTC m=+176.365075380" watchObservedRunningTime="2026-02-23 10:09:02.944957384 +0000 UTC m=+176.365330887" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.952569 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" event={"ID":"d2c1c227-0297-4ba0-9acb-4690cffd0554","Type":"ContainerStarted","Data":"cfd0e62e772ceb74e5ce46e5b9578441a9e996885a7994906337a9dcac3275fc"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.961155 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.961431 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-mz6cv_bcf306b5-f8ab-4774-8163-c2a2b47f1940/controller-manager/0.log" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.961500 4904 generic.go:334] "Generic (PLEG): container finished" podID="bcf306b5-f8ab-4774-8163-c2a2b47f1940" containerID="08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399" exitCode=2 Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.961583 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" event={"ID":"bcf306b5-f8ab-4774-8163-c2a2b47f1940","Type":"ContainerDied","Data":"08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.961637 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" event={"ID":"bcf306b5-f8ab-4774-8163-c2a2b47f1940","Type":"ContainerDied","Data":"7f077a167b3efa705f2358b3e45a6a7673dd3336905acd297d86d6f529daf141"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.961660 4904 scope.go:117] "RemoveContainer" containerID="08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.961839 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mz6cv" Feb 23 10:09:02 crc kubenswrapper[4904]: E0223 10:09:02.962610 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.462583551 +0000 UTC m=+176.882957064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.966582 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" podStartSLOduration=106.966571528 podStartE2EDuration="1m46.966571528s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:02.965420024 +0000 UTC m=+176.385793537" watchObservedRunningTime="2026-02-23 10:09:02.966571528 +0000 UTC m=+176.386945031" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.969349 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" event={"ID":"6cb983a4-cc92-48b0-b722-920f10ca60f1","Type":"ContainerStarted","Data":"5ffba8870e61c9f9f2ecb659a5793ef52677b8273ff874933440438281c8e245"} Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.970302 4904 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5bfn9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.970342 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" podUID="74d9be42-ba8c-426b-b4b0-bad0bc65648b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 23 10:09:02 crc kubenswrapper[4904]: I0223 10:09:02.971232 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" podUID="879b2a65-a224-4ac5-8d57-ea3b776d4a5c" containerName="route-controller-manager" containerID="cri-o://6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6" gracePeriod=30 Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.004174 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" podStartSLOduration=107.00415902 podStartE2EDuration="1m47.00415902s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:03.004101588 +0000 UTC m=+176.424475101" watchObservedRunningTime="2026-02-23 10:09:03.00415902 +0000 UTC m=+176.424532533" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.007633 4904 scope.go:117] "RemoveContainer" containerID="08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399" Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.008408 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399\": container with ID starting with 08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399 not found: ID does not exist" containerID="08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.008440 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399"} err="failed to get container status \"08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399\": rpc error: code = NotFound desc = could not find container \"08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399\": container with ID starting with 08b0e5b8f1f49bf8691c5b9e07b6199cda4f8743a0e59b2805f3fa47e0738399 not found: ID does not exist" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.008503 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.008819 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.015196 4904 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-ws8fp container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.015249 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp" podUID="03eb9af6-09e2-4d8b-ac64-c924049bebee" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.031921 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5m7h4" podStartSLOduration=8.031902324 podStartE2EDuration="8.031902324s" podCreationTimestamp="2026-02-23 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:03.031605265 +0000 UTC m=+176.451978778" watchObservedRunningTime="2026-02-23 10:09:03.031902324 +0000 UTC m=+176.452275837" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.067929 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.077737 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" podStartSLOduration=108.077701926 podStartE2EDuration="1m48.077701926s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:03.073846543 +0000 UTC m=+176.494220066" 
watchObservedRunningTime="2026-02-23 10:09:03.077701926 +0000 UTC m=+176.498075439" Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.111555 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.611538519 +0000 UTC m=+177.031912032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.112948 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.144994 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8mh2" podStartSLOduration=107.144975469 podStartE2EDuration="1m47.144975469s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:03.144912537 +0000 UTC m=+176.565286050" watchObservedRunningTime="2026-02-23 10:09:03.144975469 +0000 UTC m=+176.565348982" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.167486 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrs2w" podStartSLOduration=107.167470178 podStartE2EDuration="1m47.167470178s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:03.166359666 +0000 UTC m=+176.586733179" watchObservedRunningTime="2026-02-23 10:09:03.167470178 +0000 UTC m=+176.587843691" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.175777 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.176079 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.676066361 +0000 UTC m=+177.096439874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.199697 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bm6q7" podStartSLOduration=107.199680973 podStartE2EDuration="1m47.199680973s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:03.199076615 +0000 UTC m=+176.619450128" watchObservedRunningTime="2026-02-23 10:09:03.199680973 +0000 UTC m=+176.620054486" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.267420 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dqz6v" podStartSLOduration=107.267402848 podStartE2EDuration="1m47.267402848s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:03.231813975 +0000 UTC m=+176.652187478" watchObservedRunningTime="2026-02-23 10:09:03.267402848 +0000 UTC m=+176.687776361" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.282725 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.283048 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.783036247 +0000 UTC m=+177.203409750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.286769 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mz6cv"] Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.286801 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mz6cv"] Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.304101 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ls9fb" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.383336 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.383654 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.883632386 +0000 UTC m=+177.304005899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.386185 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 10:09:03 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Feb 23 10:09:03 crc kubenswrapper[4904]: [+]process-running ok Feb 23 10:09:03 crc kubenswrapper[4904]: healthz check failed Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.386236 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.458571 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.459362 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.482466 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.484768 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.485117 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:03.985099691 +0000 UTC m=+177.405473204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.504801 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.569578 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.587154 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.587376 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a42864-3170-463d-9d45-92324932b171-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c3a42864-3170-463d-9d45-92324932b171\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.587458 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a42864-3170-463d-9d45-92324932b171-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c3a42864-3170-463d-9d45-92324932b171\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.587628 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.087611537 +0000 UTC m=+177.507985050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.590273 4904 patch_prober.go:28] interesting pod/apiserver-76f77b778f-scdtm container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]log ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]etcd ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/generic-apiserver-start-informers ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/max-in-flight-filter ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 23 10:09:03 crc kubenswrapper[4904]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 23 10:09:03 crc kubenswrapper[4904]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/project.openshift.io-projectcache ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/openshift.io-startinformers ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 23 10:09:03 crc kubenswrapper[4904]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 23 10:09:03 crc kubenswrapper[4904]: livez check failed Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.590335 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-scdtm" podUID="f76de875-328c-4a57-beac-43e3b38ed141" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.689524 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a42864-3170-463d-9d45-92324932b171-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c3a42864-3170-463d-9d45-92324932b171\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.689806 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.689807 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a42864-3170-463d-9d45-92324932b171-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"c3a42864-3170-463d-9d45-92324932b171\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.689837 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a42864-3170-463d-9d45-92324932b171-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c3a42864-3170-463d-9d45-92324932b171\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.690307 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.190295478 +0000 UTC m=+177.610668991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.703556 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.722041 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a42864-3170-463d-9d45-92324932b171-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c3a42864-3170-463d-9d45-92324932b171\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.784332 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.792587 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.793074 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.293056421 +0000 UTC m=+177.713429934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.893913 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.894220 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.897079 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.3970603 +0000 UTC m=+177.817433893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.901158 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ad99ed9-56d8-464c-94ce-e861240dd0a5-metrics-certs\") pod \"network-metrics-daemon-rmw4r\" (UID: \"3ad99ed9-56d8-464c-94ce-e861240dd0a5\") " pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.923579 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.987563 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" event={"ID":"bbb0c3de-7fa6-4cbe-a56c-2fc7b9ccaeaa","Type":"ContainerStarted","Data":"53c0af3ec9431e6e3e7aa7bbca96e8cb204fb30763346083f1fba9bb25bb7f2a"} Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.991448 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rmw4r" Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.995141 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.995284 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.49526205 +0000 UTC m=+177.915635563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:03 crc kubenswrapper[4904]: I0223 10:09:03.995373 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:03 crc kubenswrapper[4904]: E0223 10:09:03.995707 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.495694772 +0000 UTC m=+177.916068285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.001122 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-npj6g" event={"ID":"881a7096-caa3-4abc-8a27-f9544300a130","Type":"ContainerStarted","Data":"146649f0c65346d60cf87944e3cd9fae6caaad591e4a478b7e251b679f0d1b4c"} Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.022309 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-54qmm" podStartSLOduration=108.022279102 podStartE2EDuration="1m48.022279102s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:04.016383939 +0000 UTC m=+177.436757452" watchObservedRunningTime="2026-02-23 10:09:04.022279102 +0000 UTC m=+177.442652635" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.023844 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" event={"ID":"c8c284af-08a1-403a-9f3e-f7ed477a1bf9","Type":"ContainerStarted","Data":"685fbfd582ce69cd7b8aa9e1622eb6d728564c6a10b19c00e670c67e4ad5cf56"} Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.060915 4904 generic.go:334] "Generic (PLEG): container finished" podID="879b2a65-a224-4ac5-8d57-ea3b776d4a5c" containerID="6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6" exitCode=0 Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.061622 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.061924 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" event={"ID":"879b2a65-a224-4ac5-8d57-ea3b776d4a5c","Type":"ContainerDied","Data":"6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6"} Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.061948 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w" event={"ID":"879b2a65-a224-4ac5-8d57-ea3b776d4a5c","Type":"ContainerDied","Data":"098772ec483d3453597b881def343ceedb43e2bac78444b69d42f37dae388720"} Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.061963 4904 scope.go:117] "RemoveContainer" containerID="6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.073473 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d97bd54d7-9pds6"] Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.088868 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb2c7" podStartSLOduration=108.07370626 podStartE2EDuration="1m48.07370626s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:04.060797911 +0000 UTC m=+177.481171444" watchObservedRunningTime="2026-02-23 10:09:04.07370626 +0000 UTC m=+177.494079783" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.094523 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ctv9r" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.099509 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-config\") pod \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.099567 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwqgr\" (UniqueName: \"kubernetes.io/projected/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-kube-api-access-pwqgr\") pod \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.099638 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-client-ca\") pod \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.099743 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.099787 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-serving-cert\") pod \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\" (UID: \"879b2a65-a224-4ac5-8d57-ea3b776d4a5c\") " Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.103288 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-config" (OuterVolumeSpecName: "config") pod "879b2a65-a224-4ac5-8d57-ea3b776d4a5c" (UID: "879b2a65-a224-4ac5-8d57-ea3b776d4a5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.103662 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.603640657 +0000 UTC m=+178.024014240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.104086 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-client-ca" (OuterVolumeSpecName: "client-ca") pod "879b2a65-a224-4ac5-8d57-ea3b776d4a5c" (UID: "879b2a65-a224-4ac5-8d57-ea3b776d4a5c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.117741 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "879b2a65-a224-4ac5-8d57-ea3b776d4a5c" (UID: "879b2a65-a224-4ac5-8d57-ea3b776d4a5c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:09:04 crc kubenswrapper[4904]: W0223 10:09:04.121328 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1956dc12_a0b1_4439_b13a_3ffc15700f02.slice/crio-29eaffe6cc100266accea92dd454e1e55b1522987b2a80faef5fd4c5c224141e WatchSource:0}: Error finding container 29eaffe6cc100266accea92dd454e1e55b1522987b2a80faef5fd4c5c224141e: Status 404 returned error can't find the container with id 29eaffe6cc100266accea92dd454e1e55b1522987b2a80faef5fd4c5c224141e Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.124315 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-kube-api-access-pwqgr" (OuterVolumeSpecName: "kube-api-access-pwqgr") pod "879b2a65-a224-4ac5-8d57-ea3b776d4a5c" (UID: "879b2a65-a224-4ac5-8d57-ea3b776d4a5c"). InnerVolumeSpecName "kube-api-access-pwqgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.121462 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-48sfc" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.125583 4904 scope.go:117] "RemoveContainer" containerID="6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6" Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.150789 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6\": container with ID starting with 6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6 not found: ID does not exist" containerID="6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.150840 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6"} err="failed to get container status \"6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6\": rpc error: code = NotFound desc = could not find container \"6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6\": container with ID starting with 6078cf23087b75367db8e2483f10ee52e25b4d4059bc4437006b0179e791bed6 not found: ID does not exist" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.204650 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.205622 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.205643 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.205653 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.205663 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwqgr\" (UniqueName: \"kubernetes.io/projected/879b2a65-a224-4ac5-8d57-ea3b776d4a5c-kube-api-access-pwqgr\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.210429 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.709206923 +0000 UTC m=+178.129580436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.307220 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.307723 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.807696291 +0000 UTC m=+178.228069804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.404656 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w"] Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.408880 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.409266 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:04.909252088 +0000 UTC m=+178.329625601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.411128 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4j2w"] Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.420279 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 10:09:04 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Feb 23 10:09:04 crc kubenswrapper[4904]: [+]process-running ok Feb 23 10:09:04 crc kubenswrapper[4904]: healthz check failed Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.420339 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.486778 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-z6f26" Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.497244 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.510236 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.510540 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.010524708 +0000 UTC m=+178.430898221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.611526 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.611831 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.111817718 +0000 UTC m=+178.532191231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.712786 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.713394 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.213337504 +0000 UTC m=+178.633711017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.791214 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rmw4r"] Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.814380 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.814685 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.314664285 +0000 UTC m=+178.735037798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:04 crc kubenswrapper[4904]: I0223 10:09:04.915160 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:04 crc kubenswrapper[4904]: E0223 10:09:04.915684 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.415669147 +0000 UTC m=+178.836042660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.011961 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"] Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.012145 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879b2a65-a224-4ac5-8d57-ea3b776d4a5c" containerName="route-controller-manager" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.012156 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="879b2a65-a224-4ac5-8d57-ea3b776d4a5c" containerName="route-controller-manager" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.012256 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="879b2a65-a224-4ac5-8d57-ea3b776d4a5c" containerName="route-controller-manager" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.012574 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.017560 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.017900 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.517886924 +0000 UTC m=+178.938260437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.018411 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.018605 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.018730 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.018874 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.019011 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.019511 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.035028 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"] Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.090544 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" event={"ID":"1956dc12-a0b1-4439-b13a-3ffc15700f02","Type":"ContainerStarted","Data":"f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4"} Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.091063 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.091131 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" event={"ID":"1956dc12-a0b1-4439-b13a-3ffc15700f02","Type":"ContainerStarted","Data":"29eaffe6cc100266accea92dd454e1e55b1522987b2a80faef5fd4c5c224141e"} Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.091909 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c3a42864-3170-463d-9d45-92324932b171","Type":"ContainerStarted","Data":"b6392dced774a6bd7c510a4939dd65f9a4aa471e2a5ee940f1b10e038bd0c190"} Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.091932 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c3a42864-3170-463d-9d45-92324932b171","Type":"ContainerStarted","Data":"6552f24853b1ad145e872617aadaddbbf2efc766be99a768cbc771b6913d4967"} Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.107344 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" podStartSLOduration=5.107326886 podStartE2EDuration="5.107326886s" podCreationTimestamp="2026-02-23 10:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:05.106689498 +0000 UTC m=+178.527063011" watchObservedRunningTime="2026-02-23 10:09:05.107326886 +0000 UTC m=+178.527700399" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.110131 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.121595 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.121706 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.621689027 +0000 UTC m=+179.042062540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.122183 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-client-ca\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.122300 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfcjz\" (UniqueName: \"kubernetes.io/projected/cdd72812-1cf3-4262-9523-0a4e8402cae2-kube-api-access-pfcjz\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.123365 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.123499 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-config\") pod 
\"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.123596 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd72812-1cf3-4262-9523-0a4e8402cae2-serving-cert\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.124034 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.624022116 +0000 UTC m=+179.044395629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.127761 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.127746155 podStartE2EDuration="2.127746155s" podCreationTimestamp="2026-02-23 10:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:05.127288152 +0000 UTC m=+178.547661665" watchObservedRunningTime="2026-02-23 10:09:05.127746155 +0000 UTC m=+178.548119668" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.133619 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-npj6g" event={"ID":"881a7096-caa3-4abc-8a27-f9544300a130","Type":"ContainerStarted","Data":"40e2c95252ed934dd7875622890f05f77f5cc58fa6f6bca02bcdb299e409572e"} Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.150619 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" event={"ID":"3ad99ed9-56d8-464c-94ce-e861240dd0a5","Type":"ContainerStarted","Data":"b62f0389f02d789f5b11e612fba16c138f0ff202ea6bc103ea87390fc3f7137b"} Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.224434 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.224908 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfcjz\" (UniqueName: \"kubernetes.io/projected/cdd72812-1cf3-4262-9523-0a4e8402cae2-kube-api-access-pfcjz\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc 
kubenswrapper[4904]: I0223 10:09:05.225084 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-config\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.225161 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd72812-1cf3-4262-9523-0a4e8402cae2-serving-cert\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.225290 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-client-ca\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.226165 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-client-ca\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.226309 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.726296195 +0000 UTC m=+179.146669708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.227705 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-config\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.238638 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd72812-1cf3-4262-9523-0a4e8402cae2-serving-cert\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.312582 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfcjz\" (UniqueName: \"kubernetes.io/projected/cdd72812-1cf3-4262-9523-0a4e8402cae2-kube-api-access-pfcjz\") pod \"route-controller-manager-768f9c589f-qlqkl\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.318032 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879b2a65-a224-4ac5-8d57-ea3b776d4a5c" path="/var/lib/kubelet/pods/879b2a65-a224-4ac5-8d57-ea3b776d4a5c/volumes"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.318772 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf306b5-f8ab-4774-8163-c2a2b47f1940" path="/var/lib/kubelet/pods/bcf306b5-f8ab-4774-8163-c2a2b47f1940/volumes"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.331030 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.331785 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.831773837 +0000 UTC m=+179.252147350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.350419 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.389905 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 10:09:05 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld
Feb 23 10:09:05 crc kubenswrapper[4904]: [+]process-running ok
Feb 23 10:09:05 crc kubenswrapper[4904]: healthz check failed
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.389958 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.446914 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.447630 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:05.947609844 +0000 UTC m=+179.367983357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.549512 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.549859 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.049842361 +0000 UTC m=+179.470215874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.650452 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.650605 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.150572805 +0000 UTC m=+179.570946318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.650910 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.651341 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.151325017 +0000 UTC m=+179.571698530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.675022 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8f9wz"]
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.676405 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.693008 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.694054 4904 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.697857 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8f9wz"]
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.707793 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m84cg"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.752663 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.752996 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.252979566 +0000 UTC m=+179.673353079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.818994 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.819629 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.821797 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.825749 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.844111 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.854542 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82c52\" (UniqueName: \"kubernetes.io/projected/e82107db-3789-4ed9-8b7a-7dc968cb833f-kube-api-access-82c52\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.854605 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-catalog-content\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.854652 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-utilities\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.854689 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.854952 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.354940756 +0000 UTC m=+179.775314269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.874818 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cf449"]
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.875964 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.891694 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.893598 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cf449"]
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.956363 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.957105 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82c52\" (UniqueName: \"kubernetes.io/projected/e82107db-3789-4ed9-8b7a-7dc968cb833f-kube-api-access-82c52\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.957276 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.957388 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-catalog-content\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.957511 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.957621 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-utilities\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.958195 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-utilities\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:05 crc kubenswrapper[4904]: E0223 10:09:05.958792 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.45877432 +0000 UTC m=+179.879147843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:05 crc kubenswrapper[4904]: I0223 10:09:05.959420 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-catalog-content\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.002313 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82c52\" (UniqueName: \"kubernetes.io/projected/e82107db-3789-4ed9-8b7a-7dc968cb833f-kube-api-access-82c52\") pod \"certified-operators-8f9wz\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") " pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.006606 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"]
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.050406 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tltlv"]
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.051564 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.052524 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.058795 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.058924 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64dk\" (UniqueName: \"kubernetes.io/projected/2ae8aff6-7d01-4352-b974-418342d434b3-kube-api-access-f64dk\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.059080 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-utilities\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.059180 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.059330 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.059512 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.059548 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-catalog-content\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: E0223 10:09:06.059729 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.55970108 +0000 UTC m=+179.980074583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.087617 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tltlv"]
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.116140 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.164096 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.164270 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-catalog-content\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.164298 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-catalog-content\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.164318 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-utilities\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.164360 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64dk\" (UniqueName: \"kubernetes.io/projected/2ae8aff6-7d01-4352-b974-418342d434b3-kube-api-access-f64dk\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.164375 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-utilities\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.164418 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jzkf\" (UniqueName: \"kubernetes.io/projected/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-kube-api-access-5jzkf\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: E0223 10:09:06.164546 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.664533343 +0000 UTC m=+180.084906856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.164903 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-catalog-content\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.165349 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-utilities\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.170975 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.190220 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64dk\" (UniqueName: \"kubernetes.io/projected/2ae8aff6-7d01-4352-b974-418342d434b3-kube-api-access-f64dk\") pod \"community-operators-cf449\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") " pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.229401 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.230727 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-npj6g" event={"ID":"881a7096-caa3-4abc-8a27-f9544300a130","Type":"ContainerStarted","Data":"9a9d102b7d165673cab6d0df6f964bbbb1095a04092b998b4f8ae3ac63024bca"}
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.230763 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-npj6g" event={"ID":"881a7096-caa3-4abc-8a27-f9544300a130","Type":"ContainerStarted","Data":"7475d41d44bfb341183fc1afe32f57fdf43412c6eff8caee8c98b8336e9aee28"}
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.255914 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" event={"ID":"3ad99ed9-56d8-464c-94ce-e861240dd0a5","Type":"ContainerStarted","Data":"e685e50ee2cf5c553783ae60028dba175c4cb13806c22a228cb45f4c2f00a466"}
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.255963 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rmw4r" event={"ID":"3ad99ed9-56d8-464c-94ce-e861240dd0a5","Type":"ContainerStarted","Data":"4cd74ab3df62d3dcc9a9348198e3fb90c4f7254c4f059b0a3dcabe8e4e43be90"}
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.266449 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-catalog-content\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.266481 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-utilities\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.266534 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.266561 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jzkf\" (UniqueName: \"kubernetes.io/projected/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-kube-api-access-5jzkf\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: E0223 10:09:06.267007 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.766996928 +0000 UTC m=+180.187370441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.267112 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-catalog-content\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.269917 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x5sfd"]
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.271542 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" event={"ID":"cdd72812-1cf3-4262-9523-0a4e8402cae2","Type":"ContainerStarted","Data":"7bd8b73fcd3d9d17a116e6ae65551e93c2b6f910772432c4cf2ac9b2a70aadc0"}
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.271682 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.272357 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-utilities\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.312102 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-npj6g" podStartSLOduration=11.31208502 podStartE2EDuration="11.31208502s" podCreationTimestamp="2026-02-23 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:06.308024851 +0000 UTC m=+179.728398364" watchObservedRunningTime="2026-02-23 10:09:06.31208502 +0000 UTC m=+179.732458533"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.313395 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5sfd"]
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.319270 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jzkf\" (UniqueName: \"kubernetes.io/projected/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-kube-api-access-5jzkf\") pod \"certified-operators-tltlv\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") " pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.342304 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rmw4r" podStartSLOduration=111.342286995 podStartE2EDuration="1m51.342286995s" podCreationTimestamp="2026-02-23 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:06.340136132 +0000 UTC m=+179.760509645" watchObservedRunningTime="2026-02-23 10:09:06.342286995 +0000 UTC m=+179.762660498"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.367380 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.367608 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-utilities\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.367709 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsd75\" (UniqueName: \"kubernetes.io/projected/b9301188-d31f-45c6-a89a-7101ba4af296-kube-api-access-fsd75\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.367758 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-catalog-content\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: E0223 10:09:06.368148 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.868119953 +0000 UTC m=+180.288493466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.374721 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" podStartSLOduration=6.374703306 podStartE2EDuration="6.374703306s" podCreationTimestamp="2026-02-23 10:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:06.374210051 +0000 UTC m=+179.794583584" watchObservedRunningTime="2026-02-23 10:09:06.374703306 +0000 UTC m=+179.795076819"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.387052 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.409039 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 10:09:06 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld
Feb 23 10:09:06 crc kubenswrapper[4904]: [+]process-running ok
Feb 23 10:09:06 crc kubenswrapper[4904]: healthz check failed
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.409358 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.468655 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsd75\" (UniqueName: \"kubernetes.io/projected/b9301188-d31f-45c6-a89a-7101ba4af296-kube-api-access-fsd75\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.468702 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-catalog-content\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.468780 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.468810 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-utilities\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.469210 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-utilities\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.469673 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-catalog-content\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: E0223 10:09:06.469973 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 10:09:06.969962739 +0000 UTC m=+180.390336252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkgqc" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.508271 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsd75\" (UniqueName: \"kubernetes.io/projected/b9301188-d31f-45c6-a89a-7101ba4af296-kube-api-access-fsd75\") pod \"community-operators-x5sfd\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.575570 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:09:06 crc kubenswrapper[4904]: E0223 10:09:06.576034 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 10:09:07.076016068 +0000 UTC m=+180.496389581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.616956 4904 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-23T10:09:05.69407725Z","Handler":null,"Name":""}
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.629167 4904 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.629208 4904 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.635210 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-scdtm"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.648769 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5sfd"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.649015 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-scdtm"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.676815 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.681726 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.681765 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.733189 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkgqc\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.751628 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8f9wz"]
Feb 23 10:09:06 crc kubenswrapper[4904]: W0223 10:09:06.769282 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode82107db_3789_4ed9_8b7a_7dc968cb833f.slice/crio-9c7667ea1208b007d7a93fe02462ca88457f9be08406548a079dd2fcdacbefcb WatchSource:0}: Error finding container 9c7667ea1208b007d7a93fe02462ca88457f9be08406548a079dd2fcdacbefcb: Status 404 returned error can't find the container with id 9c7667ea1208b007d7a93fe02462ca88457f9be08406548a079dd2fcdacbefcb
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.784969 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.915336 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.942632 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cf449"]
Feb 23 10:09:06 crc kubenswrapper[4904]: I0223 10:09:06.963514 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 23 10:09:06 crc kubenswrapper[4904]: W0223 10:09:06.964707 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ae8aff6_7d01_4352_b974_418342d434b3.slice/crio-fef1cfee109cffd295edff27934fe6ac133928ce91e82fd18ebd9f986879da45 WatchSource:0}: Error finding container fef1cfee109cffd295edff27934fe6ac133928ce91e82fd18ebd9f986879da45: Status 404 returned error can't find the container with id fef1cfee109cffd295edff27934fe6ac133928ce91e82fd18ebd9f986879da45
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.040796 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.384890 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 10:09:07 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld
Feb 23 10:09:07 crc kubenswrapper[4904]: [+]process-running ok
Feb 23 10:09:07 crc kubenswrapper[4904]: healthz check failed
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.385393 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.406560 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.407017 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5sfd"]
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.407694 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf449" event={"ID":"2ae8aff6-7d01-4352-b974-418342d434b3","Type":"ContainerStarted","Data":"bd52c846636280445bf4622aa3afd130953ce7fe82e423d222d6fe698ad00738"}
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.407740 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf449" event={"ID":"2ae8aff6-7d01-4352-b974-418342d434b3","Type":"ContainerStarted","Data":"fef1cfee109cffd295edff27934fe6ac133928ce91e82fd18ebd9f986879da45"}
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.423072 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.423314 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.423507 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.423604 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.442697 4904 generic.go:334] "Generic (PLEG): container finished" podID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerID="200d26b535af51b11d23dba0f945b61f5c8b20b2c48daf3cb9d6721321a8f87e" exitCode=0
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.442813 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f9wz" event={"ID":"e82107db-3789-4ed9-8b7a-7dc968cb833f","Type":"ContainerDied","Data":"200d26b535af51b11d23dba0f945b61f5c8b20b2c48daf3cb9d6721321a8f87e"}
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.442843 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f9wz" event={"ID":"e82107db-3789-4ed9-8b7a-7dc968cb833f","Type":"ContainerStarted","Data":"9c7667ea1208b007d7a93fe02462ca88457f9be08406548a079dd2fcdacbefcb"}
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.467331 4904 generic.go:334] "Generic (PLEG): container finished" podID="c3a42864-3170-463d-9d45-92324932b171" containerID="b6392dced774a6bd7c510a4939dd65f9a4aa471e2a5ee940f1b10e038bd0c190" exitCode=0
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.467615 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c3a42864-3170-463d-9d45-92324932b171","Type":"ContainerDied","Data":"b6392dced774a6bd7c510a4939dd65f9a4aa471e2a5ee940f1b10e038bd0c190"}
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.480448 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.487832 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"140ecd01-e571-4e02-8e3e-1c99b5d3ad42","Type":"ContainerStarted","Data":"aea577df6d0785308828725707ce3bfd198bad153bf1bac86ee27ab869d82fbc"}
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.517817 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" event={"ID":"cdd72812-1cf3-4262-9523-0a4e8402cae2","Type":"ContainerStarted","Data":"52681c24e625b0d212d2b345146fd578c974cf9865787f5e88ddffe1ed1da5ea"}
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.517855 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tltlv"]
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.518756 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.528343 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.778536 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkgqc"]
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.860147 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6mlwp"]
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.862007 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.867225 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.883752 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mlwp"]
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.982371 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzzt7\" (UniqueName: \"kubernetes.io/projected/46dff5e0-cb39-492e-95c6-33e67169ef87-kube-api-access-kzzt7\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.982831 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-utilities\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:07 crc kubenswrapper[4904]: I0223 10:09:07.983009 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-catalog-content\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.026018 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.034724 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ws8fp"
Feb 23 10:09:08 crc kubenswrapper[4904]: E0223 10:09:08.056442 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaf9e3f8_fbd9_46e9_8cb6_9137ddfd139a.slice/crio-conmon-2f0fe54f7bd4da4c315d970f21f4d028ad143bb10e0401f9fa82e0eb392d425e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaf9e3f8_fbd9_46e9_8cb6_9137ddfd139a.slice/crio-2f0fe54f7bd4da4c315d970f21f4d028ad143bb10e0401f9fa82e0eb392d425e.scope\": RecentStats: unable to find data in memory cache]"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.084523 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-utilities\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.084596 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-catalog-content\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.084666 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzzt7\" (UniqueName: \"kubernetes.io/projected/46dff5e0-cb39-492e-95c6-33e67169ef87-kube-api-access-kzzt7\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.086467 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-utilities\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.086698 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-catalog-content\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.112809 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzzt7\" (UniqueName: \"kubernetes.io/projected/46dff5e0-cb39-492e-95c6-33e67169ef87-kube-api-access-kzzt7\") pod \"redhat-marketplace-6mlwp\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") " pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.160017 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.251901 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-76wlb"]
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.252875 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.278058 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.278436 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76wlb"]
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.286078 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-utilities\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.286167 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdgdq\" (UniqueName: \"kubernetes.io/projected/a0bfc05c-680d-4e43-96e2-eed1840b26ac-kube-api-access-zdgdq\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.286188 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-catalog-content\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.380157 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kg9d6"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.386886 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdgdq\" (UniqueName: \"kubernetes.io/projected/a0bfc05c-680d-4e43-96e2-eed1840b26ac-kube-api-access-zdgdq\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.386929 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-catalog-content\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.387019 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-utilities\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.387559 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-utilities\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.387625 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-catalog-content\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.388436 4904 patch_prober.go:28] interesting pod/router-default-5444994796-kg9d6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 10:09:08 crc kubenswrapper[4904]: [+]has-synced ok
Feb 23 10:09:08 crc kubenswrapper[4904]: [+]process-running ok
Feb 23 10:09:08 crc kubenswrapper[4904]: healthz check failed
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.388477 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kg9d6" podUID="abb69e50-b84d-4499-a703-72e00ef6ff2a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.400066 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x6bcw"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.400107 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x6bcw"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.404584 4904 patch_prober.go:28] interesting pod/console-f9d7485db-x6bcw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.404633 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x6bcw" podUID="abc78ff8-2055-4dbe-ae4e-67061adfe881" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.420672 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdgdq\" (UniqueName: \"kubernetes.io/projected/a0bfc05c-680d-4e43-96e2-eed1840b26ac-kube-api-access-zdgdq\") pod \"redhat-marketplace-76wlb\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.529505 4904 generic.go:334] "Generic (PLEG): container finished" podID="2ae8aff6-7d01-4352-b974-418342d434b3" containerID="bd52c846636280445bf4622aa3afd130953ce7fe82e423d222d6fe698ad00738" exitCode=0
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.529612 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf449" event={"ID":"2ae8aff6-7d01-4352-b974-418342d434b3","Type":"ContainerDied","Data":"bd52c846636280445bf4622aa3afd130953ce7fe82e423d222d6fe698ad00738"}
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.549552 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" event={"ID":"0fe2282c-11c4-4545-9301-f417bbe9dee7","Type":"ContainerStarted","Data":"c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e"}
Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.549588 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc"
event={"ID":"0fe2282c-11c4-4545-9301-f417bbe9dee7","Type":"ContainerStarted","Data":"03ec02ab76cf8c3f527af3f65ab5cf020e1f17b973143e357d247359e4dfa5bd"} Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.550203 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.557393 4904 generic.go:334] "Generic (PLEG): container finished" podID="b9301188-d31f-45c6-a89a-7101ba4af296" containerID="914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3" exitCode=0 Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.557488 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5sfd" event={"ID":"b9301188-d31f-45c6-a89a-7101ba4af296","Type":"ContainerDied","Data":"914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3"} Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.557515 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5sfd" event={"ID":"b9301188-d31f-45c6-a89a-7101ba4af296","Type":"ContainerStarted","Data":"e1d662760a6595937337c02fbbd763dbfe86e45aabc5376f218e2f13ffa02a05"} Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.559937 4904 generic.go:334] "Generic (PLEG): container finished" podID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerID="2f0fe54f7bd4da4c315d970f21f4d028ad143bb10e0401f9fa82e0eb392d425e" exitCode=0 Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.560190 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tltlv" event={"ID":"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a","Type":"ContainerDied","Data":"2f0fe54f7bd4da4c315d970f21f4d028ad143bb10e0401f9fa82e0eb392d425e"} Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.560219 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tltlv" event={"ID":"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a","Type":"ContainerStarted","Data":"a94f8f61577b6ebac4a89724e1ba769484c60f3f79d10ca9c7037f177b81eee0"} Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.568521 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76wlb" Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.581607 4904 generic.go:334] "Generic (PLEG): container finished" podID="d2c1c227-0297-4ba0-9acb-4690cffd0554" containerID="cfd0e62e772ceb74e5ce46e5b9578441a9e996885a7994906337a9dcac3275fc" exitCode=0 Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.581668 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" event={"ID":"d2c1c227-0297-4ba0-9acb-4690cffd0554","Type":"ContainerDied","Data":"cfd0e62e772ceb74e5ce46e5b9578441a9e996885a7994906337a9dcac3275fc"} Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.597594 4904 generic.go:334] "Generic (PLEG): container finished" podID="140ecd01-e571-4e02-8e3e-1c99b5d3ad42" containerID="55f7f3da2a99fb2a112baca3d266a6bc446712a83e698b1c42116c088e191304" exitCode=0 Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.597854 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"140ecd01-e571-4e02-8e3e-1c99b5d3ad42","Type":"ContainerDied","Data":"55f7f3da2a99fb2a112baca3d266a6bc446712a83e698b1c42116c088e191304"} Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.690362 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" podStartSLOduration=112.690345402 podStartE2EDuration="1m52.690345402s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:08.676744983 +0000 UTC m=+182.097118496" watchObservedRunningTime="2026-02-23 10:09:08.690345402 +0000 UTC m=+182.110718915" Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.872692 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j6l6g"] Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.873836 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.888813 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.899266 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-catalog-content\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.899303 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklwb\" (UniqueName: \"kubernetes.io/projected/2d608de8-d727-4542-b6aa-4fedda2eaa3f-kube-api-access-pklwb\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.899396 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-utilities\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.910571 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6l6g"] Feb 23 10:09:08 crc kubenswrapper[4904]: I0223 10:09:08.953618 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mlwp"] Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.001625 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-utilities\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.002152 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-catalog-content\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.002178 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklwb\" (UniqueName: \"kubernetes.io/projected/2d608de8-d727-4542-b6aa-4fedda2eaa3f-kube-api-access-pklwb\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.004385 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-utilities\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.004638 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-catalog-content\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.026792 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklwb\" (UniqueName: \"kubernetes.io/projected/2d608de8-d727-4542-b6aa-4fedda2eaa3f-kube-api-access-pklwb\") pod \"redhat-operators-j6l6g\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") " pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.081039 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.104665 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a42864-3170-463d-9d45-92324932b171-kubelet-dir\") pod \"c3a42864-3170-463d-9d45-92324932b171\" (UID: \"c3a42864-3170-463d-9d45-92324932b171\") " Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.104784 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a42864-3170-463d-9d45-92324932b171-kube-api-access\") pod \"c3a42864-3170-463d-9d45-92324932b171\" (UID: \"c3a42864-3170-463d-9d45-92324932b171\") " Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.106317 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3a42864-3170-463d-9d45-92324932b171-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c3a42864-3170-463d-9d45-92324932b171" (UID: "c3a42864-3170-463d-9d45-92324932b171"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.109899 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a42864-3170-463d-9d45-92324932b171-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c3a42864-3170-463d-9d45-92324932b171" (UID: "c3a42864-3170-463d-9d45-92324932b171"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.153556 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76wlb"] Feb 23 10:09:09 crc kubenswrapper[4904]: W0223 10:09:09.180788 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0bfc05c_680d_4e43_96e2_eed1840b26ac.slice/crio-58942980ead3c188a73e6334dba061b907efc37c4a1f572567784fafce957085 WatchSource:0}: Error finding container 58942980ead3c188a73e6334dba061b907efc37c4a1f572567784fafce957085: Status 404 returned error can't find the container with id 58942980ead3c188a73e6334dba061b907efc37c4a1f572567784fafce957085 Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.206558 4904 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a42864-3170-463d-9d45-92324932b171-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.206588 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a42864-3170-463d-9d45-92324932b171-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.208376 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.282359 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rl6nq"] Feb 23 10:09:09 crc kubenswrapper[4904]: E0223 10:09:09.282538 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a42864-3170-463d-9d45-92324932b171" containerName="pruner" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.282550 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a42864-3170-463d-9d45-92324932b171" containerName="pruner" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.282649 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a42864-3170-463d-9d45-92324932b171" containerName="pruner" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.283310 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rl6nq"] Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.283391 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.308865 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-utilities\") pod \"redhat-operators-rl6nq\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.308943 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-catalog-content\") pod \"redhat-operators-rl6nq\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.309088 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktzfv\" (UniqueName: \"kubernetes.io/projected/202a07c0-2af3-4a11-b7c3-1913a8117e18-kube-api-access-ktzfv\") pod \"redhat-operators-rl6nq\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.389104 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.393657 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kg9d6" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.410008 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-utilities\") pod \"redhat-operators-rl6nq\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.410070 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-catalog-content\") pod \"redhat-operators-rl6nq\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.410208 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktzfv\" (UniqueName: \"kubernetes.io/projected/202a07c0-2af3-4a11-b7c3-1913a8117e18-kube-api-access-ktzfv\") pod \"redhat-operators-rl6nq\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.411143 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-utilities\") pod \"redhat-operators-rl6nq\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.415641 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-catalog-content\") pod \"redhat-operators-rl6nq\" (UID: 
\"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.430597 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktzfv\" (UniqueName: \"kubernetes.io/projected/202a07c0-2af3-4a11-b7c3-1913a8117e18-kube-api-access-ktzfv\") pod \"redhat-operators-rl6nq\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.626341 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.629636 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mlwp" event={"ID":"46dff5e0-cb39-492e-95c6-33e67169ef87","Type":"ContainerStarted","Data":"87d9189f91707c41493cb9dce58f8d4e7720f614e2a8d61125f68e2a4126b967"} Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.629673 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mlwp" event={"ID":"46dff5e0-cb39-492e-95c6-33e67169ef87","Type":"ContainerStarted","Data":"e8c4df4960650366dbc0b7fea9bf5bbe671580245d19deb32560b10b7084d31c"} Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.648022 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76wlb" event={"ID":"a0bfc05c-680d-4e43-96e2-eed1840b26ac","Type":"ContainerStarted","Data":"58942980ead3c188a73e6334dba061b907efc37c4a1f572567784fafce957085"} Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.668679 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.669006 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c3a42864-3170-463d-9d45-92324932b171","Type":"ContainerDied","Data":"6552f24853b1ad145e872617aadaddbbf2efc766be99a768cbc771b6913d4967"} Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.669026 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6552f24853b1ad145e872617aadaddbbf2efc766be99a768cbc771b6913d4967" Feb 23 10:09:09 crc kubenswrapper[4904]: I0223 10:09:09.711570 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6l6g"] Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.099701 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.131432 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.153600 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kubelet-dir\") pod \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\" (UID: \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\") " Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.153690 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kube-api-access\") pod \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\" (UID: \"140ecd01-e571-4e02-8e3e-1c99b5d3ad42\") " Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.153801 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "140ecd01-e571-4e02-8e3e-1c99b5d3ad42" (UID: "140ecd01-e571-4e02-8e3e-1c99b5d3ad42"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.154044 4904 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.160418 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "140ecd01-e571-4e02-8e3e-1c99b5d3ad42" (UID: "140ecd01-e571-4e02-8e3e-1c99b5d3ad42"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.160938 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rl6nq"] Feb 23 10:09:10 crc kubenswrapper[4904]: W0223 10:09:10.185991 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod202a07c0_2af3_4a11_b7c3_1913a8117e18.slice/crio-5aa79f570fbc642ef0d6e20565d41241ce327f6e6acc8aa741ef340827f30f80 WatchSource:0}: Error finding container 5aa79f570fbc642ef0d6e20565d41241ce327f6e6acc8aa741ef340827f30f80: Status 404 returned error can't find the container with id 5aa79f570fbc642ef0d6e20565d41241ce327f6e6acc8aa741ef340827f30f80 Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.255890 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2c1c227-0297-4ba0-9acb-4690cffd0554-secret-volume\") pod \"d2c1c227-0297-4ba0-9acb-4690cffd0554\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.255936 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwhmz\" (UniqueName: \"kubernetes.io/projected/d2c1c227-0297-4ba0-9acb-4690cffd0554-kube-api-access-fwhmz\") pod \"d2c1c227-0297-4ba0-9acb-4690cffd0554\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.255962 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2c1c227-0297-4ba0-9acb-4690cffd0554-config-volume\") pod \"d2c1c227-0297-4ba0-9acb-4690cffd0554\" (UID: \"d2c1c227-0297-4ba0-9acb-4690cffd0554\") " Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.256288 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/140ecd01-e571-4e02-8e3e-1c99b5d3ad42-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.258200 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c1c227-0297-4ba0-9acb-4690cffd0554-config-volume" (OuterVolumeSpecName: "config-volume") pod "d2c1c227-0297-4ba0-9acb-4690cffd0554" (UID: "d2c1c227-0297-4ba0-9acb-4690cffd0554"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.259956 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c1c227-0297-4ba0-9acb-4690cffd0554-kube-api-access-fwhmz" (OuterVolumeSpecName: "kube-api-access-fwhmz") pod "d2c1c227-0297-4ba0-9acb-4690cffd0554" (UID: "d2c1c227-0297-4ba0-9acb-4690cffd0554"). InnerVolumeSpecName "kube-api-access-fwhmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.261636 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c1c227-0297-4ba0-9acb-4690cffd0554-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d2c1c227-0297-4ba0-9acb-4690cffd0554" (UID: "d2c1c227-0297-4ba0-9acb-4690cffd0554"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.363700 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2c1c227-0297-4ba0-9acb-4690cffd0554-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.363747 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwhmz\" (UniqueName: \"kubernetes.io/projected/d2c1c227-0297-4ba0-9acb-4690cffd0554-kube-api-access-fwhmz\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.363756 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2c1c227-0297-4ba0-9acb-4690cffd0554-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.679572 4904 generic.go:334] "Generic (PLEG): container finished" podID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerID="ccf44b1147dd1cfa9e51a9a4bf3e5b382eca83539c94edd06f227d8c5a0be1b2" exitCode=0 Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.679641 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6l6g" event={"ID":"2d608de8-d727-4542-b6aa-4fedda2eaa3f","Type":"ContainerDied","Data":"ccf44b1147dd1cfa9e51a9a4bf3e5b382eca83539c94edd06f227d8c5a0be1b2"} Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.679701 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6l6g" event={"ID":"2d608de8-d727-4542-b6aa-4fedda2eaa3f","Type":"ContainerStarted","Data":"713182297aaf7f71996b6430eca0604e06d9b5f792f07ea387a7c45d714e5834"} Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.682403 4904 generic.go:334] "Generic (PLEG): container finished" podID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerID="166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6" exitCode=0 Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.682486 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rl6nq" event={"ID":"202a07c0-2af3-4a11-b7c3-1913a8117e18","Type":"ContainerDied","Data":"166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6"} Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.682532 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rl6nq" event={"ID":"202a07c0-2af3-4a11-b7c3-1913a8117e18","Type":"ContainerStarted","Data":"5aa79f570fbc642ef0d6e20565d41241ce327f6e6acc8aa741ef340827f30f80"} Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.686669 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" event={"ID":"d2c1c227-0297-4ba0-9acb-4690cffd0554","Type":"ContainerDied","Data":"ce882cf8ff2af1e7ca0a773117291cc129d0e6e46d6e94419291afe0edfd9cc6"} Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.686773 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce882cf8ff2af1e7ca0a773117291cc129d0e6e46d6e94419291afe0edfd9cc6" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.687012 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.690176 4904 generic.go:334] "Generic (PLEG): container finished" podID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerID="87d9189f91707c41493cb9dce58f8d4e7720f614e2a8d61125f68e2a4126b967" exitCode=0 Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.690227 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mlwp" event={"ID":"46dff5e0-cb39-492e-95c6-33e67169ef87","Type":"ContainerDied","Data":"87d9189f91707c41493cb9dce58f8d4e7720f614e2a8d61125f68e2a4126b967"} Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.691897 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"140ecd01-e571-4e02-8e3e-1c99b5d3ad42","Type":"ContainerDied","Data":"aea577df6d0785308828725707ce3bfd198bad153bf1bac86ee27ab869d82fbc"} Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.691921 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea577df6d0785308828725707ce3bfd198bad153bf1bac86ee27ab869d82fbc" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.691972 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.715380 4904 generic.go:334] "Generic (PLEG): container finished" podID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerID="a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453" exitCode=0 Feb 23 10:09:10 crc kubenswrapper[4904]: I0223 10:09:10.715535 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76wlb" event={"ID":"a0bfc05c-680d-4e43-96e2-eed1840b26ac","Type":"ContainerDied","Data":"a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453"} Feb 23 10:09:13 crc kubenswrapper[4904]: I0223 10:09:13.256911 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2lhs9" Feb 23 10:09:17 crc kubenswrapper[4904]: I0223 10:09:17.421534 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:17 crc kubenswrapper[4904]: I0223 10:09:17.421558 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:17 crc kubenswrapper[4904]: I0223 10:09:17.421600 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 23 10:09:17 crc kubenswrapper[4904]: I0223 10:09:17.421607 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection 
refused" Feb 23 10:09:18 crc kubenswrapper[4904]: I0223 10:09:18.413620 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:09:18 crc kubenswrapper[4904]: I0223 10:09:18.418694 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.047002 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422001 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422009 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422051 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422075 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422122 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-jnvdm" Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422585 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"a877210a05329bb4e61ae39df6f0af1fcf4556e20a56e7c0a1a2e599a8d5715b"} pod="openshift-console/downloads-7954f5f757-jnvdm" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422613 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" containerID="cri-o://a877210a05329bb4e61ae39df6f0af1fcf4556e20a56e7c0a1a2e599a8d5715b" gracePeriod=2 Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422796 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:27 crc kubenswrapper[4904]: I0223 10:09:27.422846 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 23 10:09:28 crc kubenswrapper[4904]: I0223 10:09:28.009694 4904 generic.go:334] "Generic (PLEG): container finished" podID="945901ad-f721-4897-bca6-16436563e92c" containerID="a877210a05329bb4e61ae39df6f0af1fcf4556e20a56e7c0a1a2e599a8d5715b" exitCode=0 Feb 23 10:09:28 crc kubenswrapper[4904]: I0223 10:09:28.009751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jnvdm" event={"ID":"945901ad-f721-4897-bca6-16436563e92c","Type":"ContainerDied","Data":"a877210a05329bb4e61ae39df6f0af1fcf4556e20a56e7c0a1a2e599a8d5715b"} Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.133585 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.135921 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.150172 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.234943 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.235173 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.235202 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.237279 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.237599 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.247544 4904 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.248131 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.263556 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.264069 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.268103 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.273691 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:09:30 crc kubenswrapper[4904]: I0223 10:09:30.378860 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 10:09:37 crc kubenswrapper[4904]: I0223 10:09:37.428139 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:37 crc kubenswrapper[4904]: I0223 10:09:37.428679 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 23 10:09:38 crc kubenswrapper[4904]: I0223 10:09:38.183491 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vfhxr" Feb 23 10:09:40 crc kubenswrapper[4904]: E0223 10:09:40.314705 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 23 10:09:40 crc kubenswrapper[4904]: E0223 10:09:40.315289 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82c52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8f9wz_openshift-marketplace(e82107db-3789-4ed9-8b7a-7dc968cb833f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 10:09:40 crc kubenswrapper[4904]: E0223 10:09:40.317318 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/certified-operators-8f9wz" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" Feb 23 10:09:40 crc kubenswrapper[4904]: E0223 10:09:40.366216 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 23 10:09:40 crc kubenswrapper[4904]: E0223 10:09:40.366395 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktzfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rl6nq_openshift-marketplace(202a07c0-2af3-4a11-b7c3-1913a8117e18): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 10:09:40 crc kubenswrapper[4904]: E0223 10:09:40.368209 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rl6nq" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.642543 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8f9wz" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.642550 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rl6nq" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 
10:09:41.712104 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.712515 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f64dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cf449_openshift-marketplace(2ae8aff6-7d01-4352-b974-418342d434b3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.713928 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cf449" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.729504 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.729767 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140ecd01-e571-4e02-8e3e-1c99b5d3ad42" containerName="pruner" Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.729778 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="140ecd01-e571-4e02-8e3e-1c99b5d3ad42" containerName="pruner" Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.729789 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c1c227-0297-4ba0-9acb-4690cffd0554" containerName="collect-profiles" Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.729795 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c1c227-0297-4ba0-9acb-4690cffd0554" containerName="collect-profiles" Feb 23 10:09:41 crc 
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.729891 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c1c227-0297-4ba0-9acb-4690cffd0554" containerName="collect-profiles"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.729905 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="140ecd01-e571-4e02-8e3e-1c99b5d3ad42" containerName="pruner"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.730264 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.732367 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.732550 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.750335 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.786127 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.786185 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.788321 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.788488 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fsd75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x5sfd_openshift-marketplace(b9301188-d31f-45c6-a89a-7101ba4af296): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 23 10:09:41 crc kubenswrapper[4904]: E0223 10:09:41.790121 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x5sfd" podUID="b9301188-d31f-45c6-a89a-7101ba4af296"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.887168 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.887595 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.887325 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:41 crc kubenswrapper[4904]: I0223 10:09:41.916569 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
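Because the kubelet reports the copy as "context canceled" rather than an auth or not-found error, a pull attempted by hand can separate registry-side problems from kubelet-side cancellation (for example a back-off timer or a racing pod deletion). A sketch, assuming it is run on the node with valid Red Hat registry credentials; the image reference is taken from the entry above:

  # Log in, then retry the same pull CRI-O was attempting
  podman login registry.redhat.io
  podman pull registry.redhat.io/redhat/community-operator-index:v4.18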
Feb 23 10:09:41 crc kubenswrapper[4904]: W0223 10:09:41.987684 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-54aa012f51b1bb60464b1498769279945ed07f4839edc213d50f03fe4c2b7e68 WatchSource:0}: Error finding container 54aa012f51b1bb60464b1498769279945ed07f4839edc213d50f03fe4c2b7e68: Status 404 returned error can't find the container with id 54aa012f51b1bb60464b1498769279945ed07f4839edc213d50f03fe4c2b7e68
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.084755 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76wlb" event={"ID":"a0bfc05c-680d-4e43-96e2-eed1840b26ac","Type":"ContainerStarted","Data":"054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4"}
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.085502 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.095348 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6l6g" event={"ID":"2d608de8-d727-4542-b6aa-4fedda2eaa3f","Type":"ContainerStarted","Data":"0b3f67940aba8b56fbb10735874feb708cb44320b7792db52f78708de7c04ab4"}
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.098094 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"54aa012f51b1bb60464b1498769279945ed07f4839edc213d50f03fe4c2b7e68"}
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.099692 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jnvdm" event={"ID":"945901ad-f721-4897-bca6-16436563e92c","Type":"ContainerStarted","Data":"e1dbe054f6d17093c42fb1a3cd20e146fe8f50e11b0899b0ced1cb2bd73e5751"}
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.101055 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jnvdm"
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.101269 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.101304 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.117038 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"87fb5fadf570e136e1e1d24e2b90e1d2b5952de916db365a790e29637662144c"}
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.126555 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tltlv" event={"ID":"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a","Type":"ContainerStarted","Data":"fce87d8098209527082988def67e7010e3dd53adb61adf94d26b919876d571f2"}
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.135031 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mlwp" event={"ID":"46dff5e0-cb39-492e-95c6-33e67169ef87","Type":"ContainerStarted","Data":"8b142c1fa1d2de05a6f8507096e91594bac772cd8726e5dd2901170de7bb71cc"}
Feb 23 10:09:42 crc kubenswrapper[4904]: E0223 10:09:42.139376 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x5sfd" podUID="b9301188-d31f-45c6-a89a-7101ba4af296"
Feb 23 10:09:42 crc kubenswrapper[4904]: E0223 10:09:42.139617 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cf449" podUID="2ae8aff6-7d01-4352-b974-418342d434b3"
Feb 23 10:09:42 crc kubenswrapper[4904]: W0223 10:09:42.170381 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-ac2f38cd3f85882f34ff5f3f5e390a3b834fd11b8962d11bd400498dfcad644a WatchSource:0}: Error finding container ac2f38cd3f85882f34ff5f3f5e390a3b834fd11b8962d11bd400498dfcad644a: Status 404 returned error can't find the container with id ac2f38cd3f85882f34ff5f3f5e390a3b834fd11b8962d11bd400498dfcad644a
Feb 23 10:09:42 crc kubenswrapper[4904]: I0223 10:09:42.384948 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 23 10:09:42 crc kubenswrapper[4904]: W0223 10:09:42.424499 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc8bfc35c_8181_4e21_9c4d_a834cf46a4b0.slice/crio-83895cc0bf2b25d26ec334cd9cf8a0c33bca613f190ecd58812d6170ffcd6144 WatchSource:0}: Error finding container 83895cc0bf2b25d26ec334cd9cf8a0c33bca613f190ecd58812d6170ffcd6144: Status 404 returned error can't find the container with id 83895cc0bf2b25d26ec334cd9cf8a0c33bca613f190ecd58812d6170ffcd6144
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.140057 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"798ebaeb1718d89322573222709f32b0310ae392072360fc667f834992e87974"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.143171 4904 generic.go:334] "Generic (PLEG): container finished" podID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerID="fce87d8098209527082988def67e7010e3dd53adb61adf94d26b919876d571f2" exitCode=0
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.143225 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tltlv" event={"ID":"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a","Type":"ContainerDied","Data":"fce87d8098209527082988def67e7010e3dd53adb61adf94d26b919876d571f2"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.149866 4904 generic.go:334] "Generic (PLEG): container finished" podID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerID="8b142c1fa1d2de05a6f8507096e91594bac772cd8726e5dd2901170de7bb71cc" exitCode=0
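The manager.go:1169 "Failed to process watch event ... 404" warnings are cAdvisor looking up a cgroup for a container that CRI-O has not finished creating (or has already removed); the same container IDs appear in the surrounding PLEG events, so they are usually transient. One way to cross-check an ID against the runtime, assuming shell access on the node (the ID prefix below is taken from the entries above):

  # Does CRI-O still know this container?
  sudo crictl ps -a | grep 54aa012f
  # Full runtime status as JSON, if it still exists (unique ID prefixes are accepted)
  sudo crictl inspect 54aa012f51b1 | head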
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.149943 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mlwp" event={"ID":"46dff5e0-cb39-492e-95c6-33e67169ef87","Type":"ContainerDied","Data":"8b142c1fa1d2de05a6f8507096e91594bac772cd8726e5dd2901170de7bb71cc"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.151497 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"817dcd77d84e1dc20123c21121ae8bca4687ca792072a0d39ce3720c6a61a743"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.151530 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ac2f38cd3f85882f34ff5f3f5e390a3b834fd11b8962d11bd400498dfcad644a"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.151689 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.182659 4904 generic.go:334] "Generic (PLEG): container finished" podID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerID="054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4" exitCode=0
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.182764 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76wlb" event={"ID":"a0bfc05c-680d-4e43-96e2-eed1840b26ac","Type":"ContainerDied","Data":"054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.205975 4904 generic.go:334] "Generic (PLEG): container finished" podID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerID="0b3f67940aba8b56fbb10735874feb708cb44320b7792db52f78708de7c04ab4" exitCode=0
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.206099 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6l6g" event={"ID":"2d608de8-d727-4542-b6aa-4fedda2eaa3f","Type":"ContainerDied","Data":"0b3f67940aba8b56fbb10735874feb708cb44320b7792db52f78708de7c04ab4"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.213576 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7d0604a530448c3675e5213966acd7a5fb671798803f58b9eec89a31b697871d"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.216407 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0","Type":"ContainerStarted","Data":"fbaa188bc28514324105cd8d82d9be628078e2c205b80f0899c0d17fa9ad3f4f"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.216438 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0","Type":"ContainerStarted","Data":"83895cc0bf2b25d26ec334cd9cf8a0c33bca613f190ecd58812d6170ffcd6144"}
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.218273 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.218315 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 23 10:09:43 crc kubenswrapper[4904]: I0223 10:09:43.280603 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.280579731 podStartE2EDuration="2.280579731s" podCreationTimestamp="2026-02-23 10:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:43.277545475 +0000 UTC m=+216.697918998" watchObservedRunningTime="2026-02-23 10:09:43.280579731 +0000 UTC m=+216.700953244"
Feb 23 10:09:44 crc kubenswrapper[4904]: I0223 10:09:44.222261 4904 generic.go:334] "Generic (PLEG): container finished" podID="c8bfc35c-8181-4e21-9c4d-a834cf46a4b0" containerID="fbaa188bc28514324105cd8d82d9be628078e2c205b80f0899c0d17fa9ad3f4f" exitCode=0
Feb 23 10:09:44 crc kubenswrapper[4904]: I0223 10:09:44.222309 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0","Type":"ContainerDied","Data":"fbaa188bc28514324105cd8d82d9be628078e2c205b80f0899c0d17fa9ad3f4f"}
Feb 23 10:09:44 crc kubenswrapper[4904]: I0223 10:09:44.224321 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 23 10:09:44 crc kubenswrapper[4904]: I0223 10:09:44.224365 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.228645 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tltlv" event={"ID":"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a","Type":"ContainerStarted","Data":"6ff121210cf5e40eb1f136a57e15d75f090348411e3765350b3e44b245e2594e"}
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.256135 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tltlv" podStartSLOduration=3.369638897 podStartE2EDuration="39.256089026s" podCreationTimestamp="2026-02-23 10:09:06 +0000 UTC" firstStartedPulling="2026-02-23 10:09:08.563472912 +0000 UTC m=+181.983846425" lastFinishedPulling="2026-02-23 10:09:44.449923041 +0000 UTC m=+217.870296554" observedRunningTime="2026-02-23 10:09:45.254236914 +0000 UTC m=+218.674610427" watchObservedRunningTime="2026-02-23 10:09:45.256089026 +0000 UTC m=+218.676462539"
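The recurring downloads-7954f5f757-jnvdm readiness failures above are plain HTTP GETs against the pod IP; "connection refused" just means nothing is listening on 10.217.0.15:8080 yet. The same check the prober runs can be reproduced by hand from the node (IP and port taken from the probe output above):

  # Mimic the kubelet's HTTP readiness probe; prints the HTTP status code
  curl -sS -o /dev/null -w '%{http_code}\n' http://10.217.0.15:8080/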
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.594988 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.742447 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kubelet-dir\") pod \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\" (UID: \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\") "
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.742772 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c8bfc35c-8181-4e21-9c4d-a834cf46a4b0" (UID: "c8bfc35c-8181-4e21-9c4d-a834cf46a4b0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.742891 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kube-api-access\") pod \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\" (UID: \"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0\") "
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.743136 4904 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.749820 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c8bfc35c-8181-4e21-9c4d-a834cf46a4b0" (UID: "c8bfc35c-8181-4e21-9c4d-a834cf46a4b0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.844744 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8bfc35c-8181-4e21-9c4d-a834cf46a4b0-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 10:09:45 crc kubenswrapper[4904]: I0223 10:09:45.934317 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p5d7h"]
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.233974 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6l6g" event={"ID":"2d608de8-d727-4542-b6aa-4fedda2eaa3f","Type":"ContainerStarted","Data":"3364027a197a57b7410e29e5b581fdf5b25d085cd360bacdc181ac5034be108b"}
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.236350 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c8bfc35c-8181-4e21-9c4d-a834cf46a4b0","Type":"ContainerDied","Data":"83895cc0bf2b25d26ec334cd9cf8a0c33bca613f190ecd58812d6170ffcd6144"}
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.236382 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83895cc0bf2b25d26ec334cd9cf8a0c33bca613f190ecd58812d6170ffcd6144"
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.236426 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.242277 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mlwp" event={"ID":"46dff5e0-cb39-492e-95c6-33e67169ef87","Type":"ContainerStarted","Data":"ef29420b80436415ec0be25f2f51f11011887ab8c524980d67832f2ceb183096"}
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.262082 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j6l6g" podStartSLOduration=3.764836088 podStartE2EDuration="38.262067323s" podCreationTimestamp="2026-02-23 10:09:08 +0000 UTC" firstStartedPulling="2026-02-23 10:09:10.681603876 +0000 UTC m=+184.101977389" lastFinishedPulling="2026-02-23 10:09:45.178835111 +0000 UTC m=+218.599208624" observedRunningTime="2026-02-23 10:09:46.258177072 +0000 UTC m=+219.678550595" watchObservedRunningTime="2026-02-23 10:09:46.262067323 +0000 UTC m=+219.682440836"
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.279650 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6mlwp" podStartSLOduration=3.109843348 podStartE2EDuration="39.279632092s" podCreationTimestamp="2026-02-23 10:09:07 +0000 UTC" firstStartedPulling="2026-02-23 10:09:09.647634859 +0000 UTC m=+183.068008372" lastFinishedPulling="2026-02-23 10:09:45.817423603 +0000 UTC m=+219.237797116" observedRunningTime="2026-02-23 10:09:46.276482632 +0000 UTC m=+219.696856145" watchObservedRunningTime="2026-02-23 10:09:46.279632092 +0000 UTC m=+219.700005605"
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.388786 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:46 crc kubenswrapper[4904]: I0223 10:09:46.388866 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.250657 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76wlb" event={"ID":"a0bfc05c-680d-4e43-96e2-eed1840b26ac","Type":"ContainerStarted","Data":"609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8"}
Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.274104 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-76wlb" podStartSLOduration=3.501625668 podStartE2EDuration="39.274087041s" podCreationTimestamp="2026-02-23 10:09:08 +0000 UTC" firstStartedPulling="2026-02-23 10:09:10.717092666 +0000 UTC m=+184.137466189" lastFinishedPulling="2026-02-23 10:09:46.489554039 +0000 UTC m=+219.909927562" observedRunningTime="2026-02-23 10:09:47.271460326 +0000 UTC m=+220.691833849" watchObservedRunningTime="2026-02-23 10:09:47.274087041 +0000 UTC m=+220.694460554"
Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.397981 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.421472 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.421468 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnvdm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.422022 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.422203 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jnvdm" podUID="945901ad-f721-4897-bca6-16436563e92c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.729931 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 10:09:47 crc kubenswrapper[4904]: E0223 10:09:47.730860 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bfc35c-8181-4e21-9c4d-a834cf46a4b0" containerName="pruner" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.730957 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bfc35c-8181-4e21-9c4d-a834cf46a4b0" containerName="pruner" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.731173 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8bfc35c-8181-4e21-9c4d-a834cf46a4b0" containerName="pruner" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.731699 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.733980 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.735950 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.750219 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.786261 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acf67771-0a1e-4d5d-8003-5d46e639e02a-kube-api-access\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.786323 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-var-lock\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.786351 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.827618 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tltlv" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="registry-server" probeResult="failure" output=< Feb 23 10:09:47 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 10:09:47 crc kubenswrapper[4904]: > Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.887328 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acf67771-0a1e-4d5d-8003-5d46e639e02a-kube-api-access\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.887414 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-var-lock\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.887443 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.887518 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.888146 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-var-lock\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:47 crc kubenswrapper[4904]: I0223 10:09:47.912172 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acf67771-0a1e-4d5d-8003-5d46e639e02a-kube-api-access\") pod \"installer-9-crc\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:48 crc kubenswrapper[4904]: I0223 10:09:48.048914 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:09:48 crc kubenswrapper[4904]: I0223 10:09:48.279371 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6mlwp" Feb 23 10:09:48 crc kubenswrapper[4904]: I0223 10:09:48.279881 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6mlwp" Feb 23 10:09:48 crc kubenswrapper[4904]: I0223 10:09:48.568881 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-76wlb" Feb 23 10:09:48 crc kubenswrapper[4904]: I0223 10:09:48.568922 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-76wlb" Feb 23 10:09:48 crc kubenswrapper[4904]: I0223 10:09:48.665941 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 10:09:49 crc kubenswrapper[4904]: I0223 10:09:49.209429 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:49 crc kubenswrapper[4904]: I0223 10:09:49.209482 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:09:49 crc kubenswrapper[4904]: I0223 10:09:49.268775 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"acf67771-0a1e-4d5d-8003-5d46e639e02a","Type":"ContainerStarted","Data":"bb04ec70171af7a032fbcda25726419b5fa3ff4cf03195d7905a1c7cf2188ccd"} Feb 23 10:09:49 crc kubenswrapper[4904]: I0223 10:09:49.373111 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6mlwp" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="registry-server" probeResult="failure" output=< Feb 23 10:09:49 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 10:09:49 crc kubenswrapper[4904]: > Feb 23 10:09:49 crc kubenswrapper[4904]: I0223 10:09:49.628764 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-76wlb" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="registry-server" probeResult="failure" output=< Feb 23 10:09:49 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 10:09:49 crc 
Feb 23 10:09:49 crc kubenswrapper[4904]: I0223 10:09:49.628764 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-76wlb" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="registry-server" probeResult="failure" output=<
Feb 23 10:09:49 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s
Feb 23 10:09:49 crc kubenswrapper[4904]: >
Feb 23 10:09:50 crc kubenswrapper[4904]: I0223 10:09:50.259252 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j6l6g" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="registry-server" probeResult="failure" output=<
Feb 23 10:09:50 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s
Feb 23 10:09:50 crc kubenswrapper[4904]: >
Feb 23 10:09:50 crc kubenswrapper[4904]: I0223 10:09:50.275176 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"acf67771-0a1e-4d5d-8003-5d46e639e02a","Type":"ContainerStarted","Data":"5c3032525e5634db7ac53a8c9397e8e45e95176e01b143e6e0820026a33a56ed"}
Feb 23 10:09:50 crc kubenswrapper[4904]: I0223 10:09:50.289938 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.28991919 podStartE2EDuration="3.28991919s" podCreationTimestamp="2026-02-23 10:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:09:50.286676648 +0000 UTC m=+223.707050171" watchObservedRunningTime="2026-02-23 10:09:50.28991919 +0000 UTC m=+223.710292703"
Feb 23 10:09:53 crc kubenswrapper[4904]: I0223 10:09:53.292655 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf449" event={"ID":"2ae8aff6-7d01-4352-b974-418342d434b3","Type":"ContainerStarted","Data":"c8377cd1e4be2464b417e4a2a043243041f97b6a2bf6467dc762e1d82db024b9"}
Feb 23 10:09:54 crc kubenswrapper[4904]: I0223 10:09:54.299776 4904 generic.go:334] "Generic (PLEG): container finished" podID="2ae8aff6-7d01-4352-b974-418342d434b3" containerID="c8377cd1e4be2464b417e4a2a043243041f97b6a2bf6467dc762e1d82db024b9" exitCode=0
Feb 23 10:09:54 crc kubenswrapper[4904]: I0223 10:09:54.299863 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf449" event={"ID":"2ae8aff6-7d01-4352-b974-418342d434b3","Type":"ContainerDied","Data":"c8377cd1e4be2464b417e4a2a043243041f97b6a2bf6467dc762e1d82db024b9"}
Feb 23 10:09:56 crc kubenswrapper[4904]: I0223 10:09:56.455411 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:56 crc kubenswrapper[4904]: I0223 10:09:56.492231 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:57 crc kubenswrapper[4904]: I0223 10:09:57.324635 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf449" event={"ID":"2ae8aff6-7d01-4352-b974-418342d434b3","Type":"ContainerStarted","Data":"77315dedf95435116fc29789d760c3d10059d3a8d3d30d2adff882f9146f0fad"}
Feb 23 10:09:57 crc kubenswrapper[4904]: I0223 10:09:57.343429 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cf449" podStartSLOduration=4.26119109 podStartE2EDuration="52.343401504s" podCreationTimestamp="2026-02-23 10:09:05 +0000 UTC" firstStartedPulling="2026-02-23 10:09:08.534217464 +0000 UTC m=+181.954590977" lastFinishedPulling="2026-02-23 10:09:56.616427878 +0000 UTC m=+230.036801391" observedRunningTime="2026-02-23 10:09:57.342817137 +0000 UTC m=+230.763190650" watchObservedRunningTime="2026-02-23 10:09:57.343401504 +0000 UTC m=+230.763775017"
Feb 23 10:09:57 crc kubenswrapper[4904]: I0223 10:09:57.454515 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jnvdm"
Feb 23 10:09:57 crc kubenswrapper[4904]: I0223 10:09:57.519456 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tltlv"]
Feb 23 10:09:58 crc kubenswrapper[4904]: I0223 10:09:58.327323 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:58 crc kubenswrapper[4904]: I0223 10:09:58.332394 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tltlv" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="registry-server" containerID="cri-o://6ff121210cf5e40eb1f136a57e15d75f090348411e3765350b3e44b245e2594e" gracePeriod=2
Feb 23 10:09:58 crc kubenswrapper[4904]: I0223 10:09:58.366771 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:09:58 crc kubenswrapper[4904]: I0223 10:09:58.611595 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:58 crc kubenswrapper[4904]: I0223 10:09:58.665348 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-76wlb"
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.334048 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j6l6g"
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.340338 4904 generic.go:334] "Generic (PLEG): container finished" podID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerID="6ff121210cf5e40eb1f136a57e15d75f090348411e3765350b3e44b245e2594e" exitCode=0
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.340408 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tltlv" event={"ID":"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a","Type":"ContainerDied","Data":"6ff121210cf5e40eb1f136a57e15d75f090348411e3765350b3e44b245e2594e"}
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.341779 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rl6nq" event={"ID":"202a07c0-2af3-4a11-b7c3-1913a8117e18","Type":"ContainerStarted","Data":"220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35"}
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.346161 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f9wz" event={"ID":"e82107db-3789-4ed9-8b7a-7dc968cb833f","Type":"ContainerStarted","Data":"cb34f6d4ebc97312880e54405dc5011c66d3a5c91604d0744462652e8c1e3023"}
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.348956 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5sfd" event={"ID":"b9301188-d31f-45c6-a89a-7101ba4af296","Type":"ContainerStarted","Data":"a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6"}
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.390671 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j6l6g"
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.414066 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.462665 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-catalog-content\") pod \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") "
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.462759 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jzkf\" (UniqueName: \"kubernetes.io/projected/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-kube-api-access-5jzkf\") pod \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") "
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.462781 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-utilities\") pod \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\" (UID: \"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a\") "
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.463790 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-utilities" (OuterVolumeSpecName: "utilities") pod "aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" (UID: "aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.478567 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-kube-api-access-5jzkf" (OuterVolumeSpecName: "kube-api-access-5jzkf") pod "aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" (UID: "aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a"). InnerVolumeSpecName "kube-api-access-5jzkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.522020 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" (UID: "aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.565421 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.565456 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jzkf\" (UniqueName: \"kubernetes.io/projected/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-kube-api-access-5jzkf\") on node \"crc\" DevicePath \"\""
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.565468 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.894129 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d97bd54d7-9pds6"]
Feb 23 10:09:59 crc kubenswrapper[4904]: I0223 10:09:59.894404 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" podUID="1956dc12-a0b1-4439-b13a-3ffc15700f02" containerName="controller-manager" containerID="cri-o://f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4" gracePeriod=30
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.007421 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"]
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.007679 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" podUID="cdd72812-1cf3-4262-9523-0a4e8402cae2" containerName="route-controller-manager" containerID="cri-o://52681c24e625b0d212d2b345146fd578c974cf9865787f5e88ddffe1ed1da5ea" gracePeriod=30
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.315761 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6"
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.355048 4904 generic.go:334] "Generic (PLEG): container finished" podID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerID="cb34f6d4ebc97312880e54405dc5011c66d3a5c91604d0744462652e8c1e3023" exitCode=0
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.355144 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f9wz" event={"ID":"e82107db-3789-4ed9-8b7a-7dc968cb833f","Type":"ContainerDied","Data":"cb34f6d4ebc97312880e54405dc5011c66d3a5c91604d0744462652e8c1e3023"}
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.356946 4904 generic.go:334] "Generic (PLEG): container finished" podID="b9301188-d31f-45c6-a89a-7101ba4af296" containerID="a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6" exitCode=0
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.357001 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5sfd" event={"ID":"b9301188-d31f-45c6-a89a-7101ba4af296","Type":"ContainerDied","Data":"a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6"}
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.365388 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tltlv" event={"ID":"aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a","Type":"ContainerDied","Data":"a94f8f61577b6ebac4a89724e1ba769484c60f3f79d10ca9c7037f177b81eee0"}
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.365503 4904 scope.go:117] "RemoveContainer" containerID="6ff121210cf5e40eb1f136a57e15d75f090348411e3765350b3e44b245e2594e"
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.365646 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tltlv"
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.376282 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-client-ca\") pod \"1956dc12-a0b1-4439-b13a-3ffc15700f02\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") "
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.376348 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-config\") pod \"1956dc12-a0b1-4439-b13a-3ffc15700f02\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") "
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.376378 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-proxy-ca-bundles\") pod \"1956dc12-a0b1-4439-b13a-3ffc15700f02\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") "
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.376413 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28fdr\" (UniqueName: \"kubernetes.io/projected/1956dc12-a0b1-4439-b13a-3ffc15700f02-kube-api-access-28fdr\") pod \"1956dc12-a0b1-4439-b13a-3ffc15700f02\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") "
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.376485 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1956dc12-a0b1-4439-b13a-3ffc15700f02-serving-cert\") pod \"1956dc12-a0b1-4439-b13a-3ffc15700f02\" (UID: \"1956dc12-a0b1-4439-b13a-3ffc15700f02\") "
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.377464 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-config" (OuterVolumeSpecName: "config") pod "1956dc12-a0b1-4439-b13a-3ffc15700f02" (UID: "1956dc12-a0b1-4439-b13a-3ffc15700f02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.377984 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1956dc12-a0b1-4439-b13a-3ffc15700f02" (UID: "1956dc12-a0b1-4439-b13a-3ffc15700f02"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.378099 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-client-ca" (OuterVolumeSpecName: "client-ca") pod "1956dc12-a0b1-4439-b13a-3ffc15700f02" (UID: "1956dc12-a0b1-4439-b13a-3ffc15700f02"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.386160 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1956dc12-a0b1-4439-b13a-3ffc15700f02-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1956dc12-a0b1-4439-b13a-3ffc15700f02" (UID: "1956dc12-a0b1-4439-b13a-3ffc15700f02"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.386268 4904 generic.go:334] "Generic (PLEG): container finished" podID="cdd72812-1cf3-4262-9523-0a4e8402cae2" containerID="52681c24e625b0d212d2b345146fd578c974cf9865787f5e88ddffe1ed1da5ea" exitCode=0
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.386334 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" event={"ID":"cdd72812-1cf3-4262-9523-0a4e8402cae2","Type":"ContainerDied","Data":"52681c24e625b0d212d2b345146fd578c974cf9865787f5e88ddffe1ed1da5ea"}
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.389939 4904 generic.go:334] "Generic (PLEG): container finished" podID="1956dc12-a0b1-4439-b13a-3ffc15700f02" containerID="f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4" exitCode=0
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.390257 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6"
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.390545 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" event={"ID":"1956dc12-a0b1-4439-b13a-3ffc15700f02","Type":"ContainerDied","Data":"f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4"}
Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.390596 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d97bd54d7-9pds6" event={"ID":"1956dc12-a0b1-4439-b13a-3ffc15700f02","Type":"ContainerDied","Data":"29eaffe6cc100266accea92dd454e1e55b1522987b2a80faef5fd4c5c224141e"}
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.420952 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tltlv"] Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.422394 4904 scope.go:117] "RemoveContainer" containerID="fce87d8098209527082988def67e7010e3dd53adb61adf94d26b919876d571f2" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.429958 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tltlv"] Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.441879 4904 scope.go:117] "RemoveContainer" containerID="2f0fe54f7bd4da4c315d970f21f4d028ad143bb10e0401f9fa82e0eb392d425e" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.477793 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1956dc12-a0b1-4439-b13a-3ffc15700f02-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.477821 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.477832 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.477840 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1956dc12-a0b1-4439-b13a-3ffc15700f02-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.477852 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28fdr\" (UniqueName: \"kubernetes.io/projected/1956dc12-a0b1-4439-b13a-3ffc15700f02-kube-api-access-28fdr\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.561589 4904 scope.go:117] "RemoveContainer" containerID="f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.580728 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.585213 4904 scope.go:117] "RemoveContainer" containerID="f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4" Feb 23 10:10:00 crc kubenswrapper[4904]: E0223 10:10:00.585597 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4\": container with ID starting with f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4 not found: ID does not exist" containerID="f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.585638 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4"} err="failed to get container status \"f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4\": rpc error: code = NotFound desc = could not find container \"f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4\": container with ID starting with f9a921517c4ab0664803f00545cb8a93b3225f63ff3ed3a8892d3b8ac52d04f4 not found: ID does not exist" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.680805 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-config\") pod \"cdd72812-1cf3-4262-9523-0a4e8402cae2\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.680852 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfcjz\" (UniqueName: \"kubernetes.io/projected/cdd72812-1cf3-4262-9523-0a4e8402cae2-kube-api-access-pfcjz\") pod \"cdd72812-1cf3-4262-9523-0a4e8402cae2\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.680905 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-client-ca\") pod \"cdd72812-1cf3-4262-9523-0a4e8402cae2\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.680931 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd72812-1cf3-4262-9523-0a4e8402cae2-serving-cert\") pod \"cdd72812-1cf3-4262-9523-0a4e8402cae2\" (UID: \"cdd72812-1cf3-4262-9523-0a4e8402cae2\") " Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.681656 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-client-ca" (OuterVolumeSpecName: "client-ca") pod "cdd72812-1cf3-4262-9523-0a4e8402cae2" (UID: "cdd72812-1cf3-4262-9523-0a4e8402cae2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.681807 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-config" (OuterVolumeSpecName: "config") pod "cdd72812-1cf3-4262-9523-0a4e8402cae2" (UID: "cdd72812-1cf3-4262-9523-0a4e8402cae2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.685033 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd72812-1cf3-4262-9523-0a4e8402cae2-kube-api-access-pfcjz" (OuterVolumeSpecName: "kube-api-access-pfcjz") pod "cdd72812-1cf3-4262-9523-0a4e8402cae2" (UID: "cdd72812-1cf3-4262-9523-0a4e8402cae2"). InnerVolumeSpecName "kube-api-access-pfcjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.685554 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd72812-1cf3-4262-9523-0a4e8402cae2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cdd72812-1cf3-4262-9523-0a4e8402cae2" (UID: "cdd72812-1cf3-4262-9523-0a4e8402cae2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.727764 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d97bd54d7-9pds6"] Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.731532 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d97bd54d7-9pds6"] Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.783895 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.783956 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfcjz\" (UniqueName: \"kubernetes.io/projected/cdd72812-1cf3-4262-9523-0a4e8402cae2-kube-api-access-pfcjz\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.783973 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdd72812-1cf3-4262-9523-0a4e8402cae2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:00 crc kubenswrapper[4904]: I0223 10:10:00.783988 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd72812-1cf3-4262-9523-0a4e8402cae2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.077258 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq"] Feb 23 10:10:01 crc kubenswrapper[4904]: E0223 10:10:01.077908 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="extract-utilities" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.077921 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="extract-utilities" Feb 23 10:10:01 crc kubenswrapper[4904]: E0223 10:10:01.077931 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd72812-1cf3-4262-9523-0a4e8402cae2" containerName="route-controller-manager" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.077938 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd72812-1cf3-4262-9523-0a4e8402cae2" containerName="route-controller-manager" Feb 23 10:10:01 crc kubenswrapper[4904]: E0223 10:10:01.077957 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="extract-content" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.077963 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="extract-content" Feb 23 10:10:01 crc kubenswrapper[4904]: E0223 10:10:01.077977 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="registry-server" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.077983 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="registry-server" Feb 23 10:10:01 crc kubenswrapper[4904]: E0223 10:10:01.077993 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1956dc12-a0b1-4439-b13a-3ffc15700f02" containerName="controller-manager" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.078000 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1956dc12-a0b1-4439-b13a-3ffc15700f02" containerName="controller-manager" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.078147 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1956dc12-a0b1-4439-b13a-3ffc15700f02" containerName="controller-manager" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.078159 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd72812-1cf3-4262-9523-0a4e8402cae2" containerName="route-controller-manager" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.078167 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" containerName="registry-server" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.078643 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.081886 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b9cd655cb-b8mzl"] Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.084754 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.088036 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.088439 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.088568 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.088693 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.089248 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.089809 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-config\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.089835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-client-ca\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.089858 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5gtl\" (UniqueName: \"kubernetes.io/projected/9efac87a-fe73-4dc0-b05c-4c4868d2f515-kube-api-access-c5gtl\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.089897 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9efac87a-fe73-4dc0-b05c-4c4868d2f515-serving-cert\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.093434 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq"] Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.095028 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.097227 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.099605 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-b9cd655cb-b8mzl"] Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.190427 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-serving-cert\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.190688 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-proxy-ca-bundles\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.190833 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-config\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.190927 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-client-ca\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.191037 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-config\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.191114 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-client-ca\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.191207 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5gtl\" (UniqueName: \"kubernetes.io/projected/9efac87a-fe73-4dc0-b05c-4c4868d2f515-kube-api-access-c5gtl\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.191294 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9efac87a-fe73-4dc0-b05c-4c4868d2f515-serving-cert\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 
10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.191379 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cccsv\" (UniqueName: \"kubernetes.io/projected/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-kube-api-access-cccsv\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.191764 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-client-ca\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.192239 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-config\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.195685 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9efac87a-fe73-4dc0-b05c-4c4868d2f515-serving-cert\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.207978 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5gtl\" (UniqueName: \"kubernetes.io/projected/9efac87a-fe73-4dc0-b05c-4c4868d2f515-kube-api-access-c5gtl\") pod \"route-controller-manager-68b44f69d7-gxkwq\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.262827 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1956dc12-a0b1-4439-b13a-3ffc15700f02" path="/var/lib/kubelet/pods/1956dc12-a0b1-4439-b13a-3ffc15700f02/volumes" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.263654 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a" path="/var/lib/kubelet/pods/aaf9e3f8-fbd9-46e9-8cb6-9137ddfd139a/volumes" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.292915 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-config\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.292954 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-client-ca\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.292996 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cccsv\" (UniqueName: \"kubernetes.io/projected/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-kube-api-access-cccsv\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.293054 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-serving-cert\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.293074 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-proxy-ca-bundles\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.294088 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-client-ca\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.294249 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-proxy-ca-bundles\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.294290 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-config\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.296397 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-serving-cert\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.308933 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cccsv\" (UniqueName: \"kubernetes.io/projected/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-kube-api-access-cccsv\") pod \"controller-manager-b9cd655cb-b8mzl\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.395172 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.398605 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" event={"ID":"cdd72812-1cf3-4262-9523-0a4e8402cae2","Type":"ContainerDied","Data":"7bd8b73fcd3d9d17a116e6ae65551e93c2b6f910772432c4cf2ac9b2a70aadc0"} Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.398759 4904 scope.go:117] "RemoveContainer" containerID="52681c24e625b0d212d2b345146fd578c974cf9865787f5e88ddffe1ed1da5ea" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.398637 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.400388 4904 generic.go:334] "Generic (PLEG): container finished" podID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerID="220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35" exitCode=0 Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.400468 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rl6nq" event={"ID":"202a07c0-2af3-4a11-b7c3-1913a8117e18","Type":"ContainerDied","Data":"220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35"} Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.405011 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f9wz" event={"ID":"e82107db-3789-4ed9-8b7a-7dc968cb833f","Type":"ContainerStarted","Data":"b753d60449f0fef8908b66b1f333f64a1fcbad31debb3ca2cc7a5cf8795c6431"} Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.406595 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.408279 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5sfd" event={"ID":"b9301188-d31f-45c6-a89a-7101ba4af296","Type":"ContainerStarted","Data":"2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c"} Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.417433 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"] Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.431890 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768f9c589f-qlqkl"] Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.456918 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x5sfd" podStartSLOduration=3.259036032 podStartE2EDuration="55.456895744s" podCreationTimestamp="2026-02-23 10:09:06 +0000 UTC" firstStartedPulling="2026-02-23 10:09:08.561193735 +0000 UTC m=+181.981567238" lastFinishedPulling="2026-02-23 10:10:00.759053437 +0000 UTC m=+234.179426950" observedRunningTime="2026-02-23 10:10:01.455011611 +0000 UTC m=+234.875385124" watchObservedRunningTime="2026-02-23 10:10:01.456895744 +0000 UTC m=+234.877269257" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.480600 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8f9wz" podStartSLOduration=3.211580731 podStartE2EDuration="56.480584708s" podCreationTimestamp="2026-02-23 10:09:05 +0000 UTC" firstStartedPulling="2026-02-23 10:09:07.48020224 +0000 UTC m=+180.900575753" lastFinishedPulling="2026-02-23 10:10:00.749206217 +0000 UTC m=+234.169579730" observedRunningTime="2026-02-23 10:10:01.478045246 +0000 UTC m=+234.898418769" watchObservedRunningTime="2026-02-23 10:10:01.480584708 +0000 UTC m=+234.900958221" Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.720094 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76wlb"] Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.720654 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-76wlb" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="registry-server" containerID="cri-o://609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8" gracePeriod=2 Feb 23 10:10:01 crc kubenswrapper[4904]: I0223 10:10:01.780572 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b9cd655cb-b8mzl"] Feb 23 10:10:01 crc kubenswrapper[4904]: W0223 10:10:01.847487 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fdd5b6f_ecf0_48a3_b13a_92c96a9d54c4.slice/crio-3d523d0e02ac69d19be996faf83506b34c3b1b907a1271df95231d4798d2fd30 WatchSource:0}: Error finding container 3d523d0e02ac69d19be996faf83506b34c3b1b907a1271df95231d4798d2fd30: Status 404 returned error can't find the container with id 3d523d0e02ac69d19be996faf83506b34c3b1b907a1271df95231d4798d2fd30 Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.042232 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq"] Feb 23 10:10:02 crc kubenswrapper[4904]: W0223 10:10:02.051082 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9efac87a_fe73_4dc0_b05c_4c4868d2f515.slice/crio-a50067ec4dce85a3b5667a6294b20cf5d81da72612f869e018482301510d646f WatchSource:0}: Error finding container a50067ec4dce85a3b5667a6294b20cf5d81da72612f869e018482301510d646f: Status 404 returned error can't find the container with id a50067ec4dce85a3b5667a6294b20cf5d81da72612f869e018482301510d646f Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.086981 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76wlb" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.101411 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-catalog-content\") pod \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.101482 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdgdq\" (UniqueName: \"kubernetes.io/projected/a0bfc05c-680d-4e43-96e2-eed1840b26ac-kube-api-access-zdgdq\") pod \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.108317 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bfc05c-680d-4e43-96e2-eed1840b26ac-kube-api-access-zdgdq" (OuterVolumeSpecName: "kube-api-access-zdgdq") pod "a0bfc05c-680d-4e43-96e2-eed1840b26ac" (UID: "a0bfc05c-680d-4e43-96e2-eed1840b26ac"). InnerVolumeSpecName "kube-api-access-zdgdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.140123 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0bfc05c-680d-4e43-96e2-eed1840b26ac" (UID: "a0bfc05c-680d-4e43-96e2-eed1840b26ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.202917 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-utilities\") pod \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\" (UID: \"a0bfc05c-680d-4e43-96e2-eed1840b26ac\") " Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.204507 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.204535 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdgdq\" (UniqueName: \"kubernetes.io/projected/a0bfc05c-680d-4e43-96e2-eed1840b26ac-kube-api-access-zdgdq\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.219638 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-utilities" (OuterVolumeSpecName: "utilities") pod "a0bfc05c-680d-4e43-96e2-eed1840b26ac" (UID: "a0bfc05c-680d-4e43-96e2-eed1840b26ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.306735 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0bfc05c-680d-4e43-96e2-eed1840b26ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.428624 4904 generic.go:334] "Generic (PLEG): container finished" podID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerID="609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8" exitCode=0 Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.428705 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76wlb" event={"ID":"a0bfc05c-680d-4e43-96e2-eed1840b26ac","Type":"ContainerDied","Data":"609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8"} Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.428733 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76wlb" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.428756 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76wlb" event={"ID":"a0bfc05c-680d-4e43-96e2-eed1840b26ac","Type":"ContainerDied","Data":"58942980ead3c188a73e6334dba061b907efc37c4a1f572567784fafce957085"} Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.428774 4904 scope.go:117] "RemoveContainer" containerID="609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.435425 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rl6nq" event={"ID":"202a07c0-2af3-4a11-b7c3-1913a8117e18","Type":"ContainerStarted","Data":"8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db"} Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.454764 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" event={"ID":"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4","Type":"ContainerStarted","Data":"e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08"} Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.454834 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" event={"ID":"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4","Type":"ContainerStarted","Data":"3d523d0e02ac69d19be996faf83506b34c3b1b907a1271df95231d4798d2fd30"} Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.455042 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.458439 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" event={"ID":"9efac87a-fe73-4dc0-b05c-4c4868d2f515","Type":"ContainerStarted","Data":"b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1"} Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.458478 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" event={"ID":"9efac87a-fe73-4dc0-b05c-4c4868d2f515","Type":"ContainerStarted","Data":"a50067ec4dce85a3b5667a6294b20cf5d81da72612f869e018482301510d646f"} Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.458682 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.458928 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rl6nq" podStartSLOduration=2.266382532 podStartE2EDuration="53.458910798s" podCreationTimestamp="2026-02-23 10:09:09 +0000 UTC" firstStartedPulling="2026-02-23 10:09:10.684905823 +0000 UTC m=+184.105279336" lastFinishedPulling="2026-02-23 10:10:01.877434089 +0000 UTC m=+235.297807602" observedRunningTime="2026-02-23 10:10:02.458360103 +0000 UTC m=+235.878733616" watchObservedRunningTime="2026-02-23 10:10:02.458910798 +0000 UTC m=+235.879284321" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.469957 4904 scope.go:117] "RemoveContainer" containerID="054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4" Feb 23 10:10:02 crc kubenswrapper[4904]: 
I0223 10:10:02.492626 4904 scope.go:117] "RemoveContainer" containerID="a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.495268 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.509548 4904 scope.go:117] "RemoveContainer" containerID="609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8" Feb 23 10:10:02 crc kubenswrapper[4904]: E0223 10:10:02.510421 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8\": container with ID starting with 609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8 not found: ID does not exist" containerID="609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.510469 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8"} err="failed to get container status \"609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8\": rpc error: code = NotFound desc = could not find container \"609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8\": container with ID starting with 609b61f6c6bf9fe5b49fdcd9f04bbbcbbe73b1f90224fcf1de39493039fdf2c8 not found: ID does not exist" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.510493 4904 scope.go:117] "RemoveContainer" containerID="054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4" Feb 23 10:10:02 crc kubenswrapper[4904]: E0223 10:10:02.511129 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4\": container with ID starting with 054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4 not found: ID does not exist" containerID="054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.511154 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4"} err="failed to get container status \"054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4\": rpc error: code = NotFound desc = could not find container \"054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4\": container with ID starting with 054cb4609d91ba2603fd4bac6f9befbd547343cddbe7249d46109799446405b4 not found: ID does not exist" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.511169 4904 scope.go:117] "RemoveContainer" containerID="a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453" Feb 23 10:10:02 crc kubenswrapper[4904]: E0223 10:10:02.511470 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453\": container with ID starting with a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453 not found: ID does not exist" containerID="a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.511489 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453"} err="failed to get container status \"a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453\": rpc error: code = NotFound desc = could not find container \"a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453\": container with ID starting with a8473b15fa9fe09106db974fe4f724f1c9af2603069980697fcc4030af284453 not found: ID does not exist" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.512402 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" podStartSLOduration=3.512359558 podStartE2EDuration="3.512359558s" podCreationTimestamp="2026-02-23 10:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:10:02.509453425 +0000 UTC m=+235.929826938" watchObservedRunningTime="2026-02-23 10:10:02.512359558 +0000 UTC m=+235.932733081" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.555325 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76wlb"] Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.561968 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-76wlb"] Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.590069 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" podStartSLOduration=2.590045216 podStartE2EDuration="2.590045216s" podCreationTimestamp="2026-02-23 10:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:10:02.58561532 +0000 UTC m=+236.005988833" watchObservedRunningTime="2026-02-23 10:10:02.590045216 +0000 UTC m=+236.010418729" Feb 23 10:10:02 crc kubenswrapper[4904]: I0223 10:10:02.610238 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:03 crc kubenswrapper[4904]: I0223 10:10:03.261841 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" path="/var/lib/kubelet/pods/a0bfc05c-680d-4e43-96e2-eed1840b26ac/volumes" Feb 23 10:10:03 crc kubenswrapper[4904]: I0223 10:10:03.262525 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd72812-1cf3-4262-9523-0a4e8402cae2" path="/var/lib/kubelet/pods/cdd72812-1cf3-4262-9523-0a4e8402cae2/volumes" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.054149 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8f9wz" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.054413 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8f9wz" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.100287 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8f9wz" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.230622 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cf449" Feb 23 
10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.231010 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cf449" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.274454 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cf449" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.515411 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8f9wz" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.516998 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cf449" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.649638 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x5sfd" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.649684 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x5sfd" Feb 23 10:10:06 crc kubenswrapper[4904]: I0223 10:10:06.691218 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x5sfd" Feb 23 10:10:07 crc kubenswrapper[4904]: I0223 10:10:07.525310 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x5sfd" Feb 23 10:10:08 crc kubenswrapper[4904]: I0223 10:10:08.519502 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x5sfd"] Feb 23 10:10:09 crc kubenswrapper[4904]: I0223 10:10:09.497131 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x5sfd" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" containerName="registry-server" containerID="cri-o://2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c" gracePeriod=2 Feb 23 10:10:09 crc kubenswrapper[4904]: I0223 10:10:09.627182 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:10:09 crc kubenswrapper[4904]: I0223 10:10:09.627613 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:10:09 crc kubenswrapper[4904]: I0223 10:10:09.685178 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:10:09 crc kubenswrapper[4904]: I0223 10:10:09.934091 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x5sfd" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.116259 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-catalog-content\") pod \"b9301188-d31f-45c6-a89a-7101ba4af296\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.116305 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-utilities\") pod \"b9301188-d31f-45c6-a89a-7101ba4af296\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.116347 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsd75\" (UniqueName: \"kubernetes.io/projected/b9301188-d31f-45c6-a89a-7101ba4af296-kube-api-access-fsd75\") pod \"b9301188-d31f-45c6-a89a-7101ba4af296\" (UID: \"b9301188-d31f-45c6-a89a-7101ba4af296\") " Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.117242 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-utilities" (OuterVolumeSpecName: "utilities") pod "b9301188-d31f-45c6-a89a-7101ba4af296" (UID: "b9301188-d31f-45c6-a89a-7101ba4af296"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.121176 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9301188-d31f-45c6-a89a-7101ba4af296-kube-api-access-fsd75" (OuterVolumeSpecName: "kube-api-access-fsd75") pod "b9301188-d31f-45c6-a89a-7101ba4af296" (UID: "b9301188-d31f-45c6-a89a-7101ba4af296"). InnerVolumeSpecName "kube-api-access-fsd75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.190816 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9301188-d31f-45c6-a89a-7101ba4af296" (UID: "b9301188-d31f-45c6-a89a-7101ba4af296"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.218524 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.218549 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9301188-d31f-45c6-a89a-7101ba4af296-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.218559 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsd75\" (UniqueName: \"kubernetes.io/projected/b9301188-d31f-45c6-a89a-7101ba4af296-kube-api-access-fsd75\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.506088 4904 generic.go:334] "Generic (PLEG): container finished" podID="b9301188-d31f-45c6-a89a-7101ba4af296" containerID="2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c" exitCode=0 Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.506138 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5sfd" event={"ID":"b9301188-d31f-45c6-a89a-7101ba4af296","Type":"ContainerDied","Data":"2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c"} Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.506198 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5sfd" event={"ID":"b9301188-d31f-45c6-a89a-7101ba4af296","Type":"ContainerDied","Data":"e1d662760a6595937337c02fbbd763dbfe86e45aabc5376f218e2f13ffa02a05"} Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.506220 4904 scope.go:117] "RemoveContainer" containerID="2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.506227 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x5sfd" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.528970 4904 scope.go:117] "RemoveContainer" containerID="a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.549257 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x5sfd"] Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.550111 4904 scope.go:117] "RemoveContainer" containerID="914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.553875 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x5sfd"] Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.562236 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.567211 4904 scope.go:117] "RemoveContainer" containerID="2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c" Feb 23 10:10:10 crc kubenswrapper[4904]: E0223 10:10:10.567632 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c\": container with ID starting with 2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c not found: ID does not exist" containerID="2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.567674 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c"} err="failed to get container status \"2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c\": rpc error: code = NotFound desc = could not find container \"2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c\": container with ID starting with 2f8090f9bf2af49914326ae65332c698140665864d80d440b3ec1ba7d843f25c not found: ID does not exist" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.567697 4904 scope.go:117] "RemoveContainer" containerID="a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6" Feb 23 10:10:10 crc kubenswrapper[4904]: E0223 10:10:10.568062 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6\": container with ID starting with a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6 not found: ID does not exist" containerID="a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.568084 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6"} err="failed to get container status \"a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6\": rpc error: code = NotFound desc = could not find container \"a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6\": container with ID starting with a62e163712a946ca2042b86bd689b0eb7dc11e43e60caabe94bbd2effbaa17b6 not found: ID does not exist" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.568100 4904 scope.go:117] "RemoveContainer" 
containerID="914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3" Feb 23 10:10:10 crc kubenswrapper[4904]: E0223 10:10:10.568399 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3\": container with ID starting with 914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3 not found: ID does not exist" containerID="914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.568438 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3"} err="failed to get container status \"914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3\": rpc error: code = NotFound desc = could not find container \"914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3\": container with ID starting with 914689930b6751713f5a56c77170403c31a7fa4be9495db99a4e5cc9fcd8c3d3 not found: ID does not exist" Feb 23 10:10:10 crc kubenswrapper[4904]: I0223 10:10:10.979993 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" podUID="eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" containerName="oauth-openshift" containerID="cri-o://16a98c868c5b4e6d828941e09457aef062c9081fd84cd85a06b3c29fe92320d2" gracePeriod=15 Feb 23 10:10:11 crc kubenswrapper[4904]: I0223 10:10:11.263783 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" path="/var/lib/kubelet/pods/b9301188-d31f-45c6-a89a-7101ba4af296/volumes" Feb 23 10:10:11 crc kubenswrapper[4904]: I0223 10:10:11.513358 4904 generic.go:334] "Generic (PLEG): container finished" podID="eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" containerID="16a98c868c5b4e6d828941e09457aef062c9081fd84cd85a06b3c29fe92320d2" exitCode=0 Feb 23 10:10:11 crc kubenswrapper[4904]: I0223 10:10:11.513445 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" event={"ID":"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66","Type":"ContainerDied","Data":"16a98c868c5b4e6d828941e09457aef062c9081fd84cd85a06b3c29fe92320d2"} Feb 23 10:10:11 crc kubenswrapper[4904]: I0223 10:10:11.940514 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048002 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knl6b\" (UniqueName: \"kubernetes.io/projected/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-kube-api-access-knl6b\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048044 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-idp-0-file-data\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048083 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-router-certs\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048106 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-policies\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048134 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-cliconfig\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048152 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-login\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048662 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048695 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048716 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-session\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048808 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-provider-selection\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048839 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-error\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048858 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-trusted-ca-bundle\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048879 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-service-ca\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048901 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-ocp-branding-template\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048934 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-dir\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.048970 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-serving-cert\") pod \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\" (UID: \"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66\") " Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.049318 4904 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.049329 4904 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.049638 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.049933 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.049952 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.052160 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.052629 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.056898 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.057015 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-kube-api-access-knl6b" (OuterVolumeSpecName: "kube-api-access-knl6b") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "kube-api-access-knl6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.057181 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.057385 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.057480 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.061114 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.061180 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" (UID: "eb0c35a8-1cd9-4584-b2a4-27c6896e9e66"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152028 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knl6b\" (UniqueName: \"kubernetes.io/projected/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-kube-api-access-knl6b\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152065 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152081 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152093 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152104 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152117 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152130 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152141 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152152 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152163 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152174 4904 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.152185 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.519571 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" event={"ID":"eb0c35a8-1cd9-4584-b2a4-27c6896e9e66","Type":"ContainerDied","Data":"d8ac97d6c1c9b484d5c46b6862660421e4f4844c681f856f67067be85f18d512"} Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.519587 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p5d7h" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.519635 4904 scope.go:117] "RemoveContainer" containerID="16a98c868c5b4e6d828941e09457aef062c9081fd84cd85a06b3c29fe92320d2" Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.543590 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p5d7h"] Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.547370 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p5d7h"] Feb 23 10:10:12 crc kubenswrapper[4904]: I0223 10:10:12.917936 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rl6nq"] Feb 23 10:10:13 crc kubenswrapper[4904]: I0223 10:10:13.261856 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" path="/var/lib/kubelet/pods/eb0c35a8-1cd9-4584-b2a4-27c6896e9e66/volumes" Feb 23 10:10:13 crc kubenswrapper[4904]: I0223 10:10:13.526972 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rl6nq" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerName="registry-server" containerID="cri-o://8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db" gracePeriod=2 Feb 23 10:10:13 crc kubenswrapper[4904]: I0223 10:10:13.958798 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.079823 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-utilities\") pod \"202a07c0-2af3-4a11-b7c3-1913a8117e18\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.079932 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-catalog-content\") pod \"202a07c0-2af3-4a11-b7c3-1913a8117e18\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.079974 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktzfv\" (UniqueName: \"kubernetes.io/projected/202a07c0-2af3-4a11-b7c3-1913a8117e18-kube-api-access-ktzfv\") pod \"202a07c0-2af3-4a11-b7c3-1913a8117e18\" (UID: \"202a07c0-2af3-4a11-b7c3-1913a8117e18\") " Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080322 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-cb86fb758-8zdxh"] Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080545 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080567 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080578 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" containerName="extract-utilities" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080586 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" containerName="extract-utilities" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080594 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" containerName="oauth-openshift" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080600 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" containerName="oauth-openshift" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080610 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerName="extract-utilities" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080616 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerName="extract-utilities" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080624 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="extract-content" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080630 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="extract-content" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080641 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" containerName="extract-content" Feb 23 10:10:14 crc 
kubenswrapper[4904]: I0223 10:10:14.080647 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" containerName="extract-content" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080657 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080662 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080672 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080678 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080689 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="extract-utilities" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080695 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="extract-utilities" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.080707 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerName="extract-content" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080717 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerName="extract-content" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080776 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-utilities" (OuterVolumeSpecName: "utilities") pod "202a07c0-2af3-4a11-b7c3-1913a8117e18" (UID: "202a07c0-2af3-4a11-b7c3-1913a8117e18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080865 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0bfc05c-680d-4e43-96e2-eed1840b26ac" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080886 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9301188-d31f-45c6-a89a-7101ba4af296" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080900 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0c35a8-1cd9-4584-b2a4-27c6896e9e66" containerName="oauth-openshift" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.080908 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerName="registry-server" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.081300 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.086325 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.086330 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.086765 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.086787 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.086888 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.086902 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.086999 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.087104 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.087194 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.087281 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.092654 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.093023 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.098135 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cb86fb758-8zdxh"] Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.099945 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202a07c0-2af3-4a11-b7c3-1913a8117e18-kube-api-access-ktzfv" (OuterVolumeSpecName: "kube-api-access-ktzfv") pod "202a07c0-2af3-4a11-b7c3-1913a8117e18" (UID: "202a07c0-2af3-4a11-b7c3-1913a8117e18"). InnerVolumeSpecName "kube-api-access-ktzfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.103196 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.109286 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.121871 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182308 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182351 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182388 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-router-certs\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182406 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182424 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-service-ca\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182449 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-error\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182467 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-login\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182505 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs5gx\" (UniqueName: \"kubernetes.io/projected/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-kube-api-access-xs5gx\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182523 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-audit-dir\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182544 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182559 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-audit-policies\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182575 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-session\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182592 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 
10:10:14.182638 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktzfv\" (UniqueName: \"kubernetes.io/projected/202a07c0-2af3-4a11-b7c3-1913a8117e18-kube-api-access-ktzfv\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.182651 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.213387 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "202a07c0-2af3-4a11-b7c3-1913a8117e18" (UID: "202a07c0-2af3-4a11-b7c3-1913a8117e18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283744 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-router-certs\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283793 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283820 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-service-ca\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283847 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-error\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283864 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283890 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-login\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " 
pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283906 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs5gx\" (UniqueName: \"kubernetes.io/projected/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-kube-api-access-xs5gx\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-audit-dir\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283941 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-audit-policies\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283958 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283979 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-session\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.283997 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.284023 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.284039 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc 
kubenswrapper[4904]: I0223 10:10:14.284079 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a07c0-2af3-4a11-b7c3-1913a8117e18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.284935 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-audit-dir\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.285208 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-service-ca\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.285455 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.285635 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-audit-policies\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.285996 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.288317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.288487 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-session\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.288918 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.289372 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-error\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.289453 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.289918 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-router-certs\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.290258 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-user-template-login\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.291632 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.300977 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs5gx\" (UniqueName: \"kubernetes.io/projected/2e63dc5b-a1b8-4267-9f0b-fb55b34aed74-kube-api-access-xs5gx\") pod \"oauth-openshift-cb86fb758-8zdxh\" (UID: \"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74\") " pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.429644 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.534592 4904 generic.go:334] "Generic (PLEG): container finished" podID="202a07c0-2af3-4a11-b7c3-1913a8117e18" containerID="8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db" exitCode=0 Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.534640 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rl6nq" event={"ID":"202a07c0-2af3-4a11-b7c3-1913a8117e18","Type":"ContainerDied","Data":"8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db"} Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.534674 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rl6nq" event={"ID":"202a07c0-2af3-4a11-b7c3-1913a8117e18","Type":"ContainerDied","Data":"5aa79f570fbc642ef0d6e20565d41241ce327f6e6acc8aa741ef340827f30f80"} Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.534693 4904 scope.go:117] "RemoveContainer" containerID="8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.534996 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rl6nq" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.551581 4904 scope.go:117] "RemoveContainer" containerID="220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.562983 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rl6nq"] Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.565636 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rl6nq"] Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.588996 4904 scope.go:117] "RemoveContainer" containerID="166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.602616 4904 scope.go:117] "RemoveContainer" containerID="8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.607156 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db\": container with ID starting with 8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db not found: ID does not exist" containerID="8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.607202 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db"} err="failed to get container status \"8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db\": rpc error: code = NotFound desc = could not find container \"8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db\": container with ID starting with 8b5d8ccd54d6560267e10f2db2b8136d9923f297de3527af62df7022f8c378db not found: ID does not exist" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.607233 4904 scope.go:117] "RemoveContainer" containerID="220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.607739 4904 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35\": container with ID starting with 220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35 not found: ID does not exist" containerID="220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.607786 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35"} err="failed to get container status \"220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35\": rpc error: code = NotFound desc = could not find container \"220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35\": container with ID starting with 220ff2c7525e0102be4ad3442ff2934d770f862525ace67d9e988238f4e75d35 not found: ID does not exist" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.607809 4904 scope.go:117] "RemoveContainer" containerID="166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6" Feb 23 10:10:14 crc kubenswrapper[4904]: E0223 10:10:14.608137 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6\": container with ID starting with 166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6 not found: ID does not exist" containerID="166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.608162 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6"} err="failed to get container status \"166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6\": rpc error: code = NotFound desc = could not find container \"166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6\": container with ID starting with 166c719ec70def24f9b6cc3292dba9b6e84027635138312f7402f996ccb9cba6 not found: ID does not exist" Feb 23 10:10:14 crc kubenswrapper[4904]: I0223 10:10:14.875325 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cb86fb758-8zdxh"] Feb 23 10:10:15 crc kubenswrapper[4904]: I0223 10:10:15.261812 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202a07c0-2af3-4a11-b7c3-1913a8117e18" path="/var/lib/kubelet/pods/202a07c0-2af3-4a11-b7c3-1913a8117e18/volumes" Feb 23 10:10:15 crc kubenswrapper[4904]: I0223 10:10:15.541533 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" event={"ID":"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74","Type":"ContainerStarted","Data":"8142cdca248ce0de773d9949e5abc1e76325e2538b2057f63d66be1c4032c8a6"} Feb 23 10:10:15 crc kubenswrapper[4904]: I0223 10:10:15.541576 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" event={"ID":"2e63dc5b-a1b8-4267-9f0b-fb55b34aed74","Type":"ContainerStarted","Data":"4f0550d6c2d8d86f09b6a2ddf9f1a704b0ed4d0ca5c3fe6777dfc4dbc5e82788"} Feb 23 10:10:15 crc kubenswrapper[4904]: I0223 10:10:15.541834 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:15 crc kubenswrapper[4904]: 
I0223 10:10:15.558990 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" podStartSLOduration=30.558969124 podStartE2EDuration="30.558969124s" podCreationTimestamp="2026-02-23 10:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:10:15.557993096 +0000 UTC m=+248.978366609" watchObservedRunningTime="2026-02-23 10:10:15.558969124 +0000 UTC m=+248.979342627" Feb 23 10:10:15 crc kubenswrapper[4904]: I0223 10:10:15.923285 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-cb86fb758-8zdxh" Feb 23 10:10:17 crc kubenswrapper[4904]: I0223 10:10:17.397840 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:10:17 crc kubenswrapper[4904]: I0223 10:10:17.397916 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:10:19 crc kubenswrapper[4904]: I0223 10:10:19.884646 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9cd655cb-b8mzl"] Feb 23 10:10:19 crc kubenswrapper[4904]: I0223 10:10:19.885159 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" podUID="5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" containerName="controller-manager" containerID="cri-o://e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08" gracePeriod=30 Feb 23 10:10:19 crc kubenswrapper[4904]: I0223 10:10:19.907055 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq"] Feb 23 10:10:19 crc kubenswrapper[4904]: I0223 10:10:19.907328 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" podUID="9efac87a-fe73-4dc0-b05c-4c4868d2f515" containerName="route-controller-manager" containerID="cri-o://b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1" gracePeriod=30 Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.278818 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.412451 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.520945 4904 util.go:48] "No ready sandbox for pod can be found. 
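
[Note] Two things worth decoding above. First, the pod_startup_latency_tracker line is plain subtraction: the pod was created at 10:09:45 and observed running at 10:10:15.558969124, so podStartSLOduration is about 30.56s. Second, the prober.go:107 failure is just an HTTP GET against 127.0.0.1:8798/health answered with "connection refused"; after enough consecutive failures the kubelet restarts the container. Below is a minimal sketch of such an HTTP liveness check; doProbe is a hypothetical helper, and real probes also honor per-probe timeouts and failure thresholds.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// doProbe performs one HTTP liveness check the way the log shows:
// GET the endpoint, treat transport errors and non-2xx/3xx as failure.
func doProbe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := doProbe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println(err) // with nothing listening: connection refused, as in the log
	}
}
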
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.564358 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-client-ca\") pod \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.564425 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9efac87a-fe73-4dc0-b05c-4c4868d2f515-serving-cert\") pod \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.564513 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-config\") pod \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.564541 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5gtl\" (UniqueName: \"kubernetes.io/projected/9efac87a-fe73-4dc0-b05c-4c4868d2f515-kube-api-access-c5gtl\") pod \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\" (UID: \"9efac87a-fe73-4dc0-b05c-4c4868d2f515\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.565278 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-client-ca" (OuterVolumeSpecName: "client-ca") pod "9efac87a-fe73-4dc0-b05c-4c4868d2f515" (UID: "9efac87a-fe73-4dc0-b05c-4c4868d2f515"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.565441 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-config" (OuterVolumeSpecName: "config") pod "9efac87a-fe73-4dc0-b05c-4c4868d2f515" (UID: "9efac87a-fe73-4dc0-b05c-4c4868d2f515"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.573386 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9efac87a-fe73-4dc0-b05c-4c4868d2f515-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9efac87a-fe73-4dc0-b05c-4c4868d2f515" (UID: "9efac87a-fe73-4dc0-b05c-4c4868d2f515"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.573733 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" event={"ID":"9efac87a-fe73-4dc0-b05c-4c4868d2f515","Type":"ContainerDied","Data":"b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1"} Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.573783 4904 scope.go:117] "RemoveContainer" containerID="b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.573812 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.573866 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9efac87a-fe73-4dc0-b05c-4c4868d2f515-kube-api-access-c5gtl" (OuterVolumeSpecName: "kube-api-access-c5gtl") pod "9efac87a-fe73-4dc0-b05c-4c4868d2f515" (UID: "9efac87a-fe73-4dc0-b05c-4c4868d2f515"). InnerVolumeSpecName "kube-api-access-c5gtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.573707 4904 generic.go:334] "Generic (PLEG): container finished" podID="9efac87a-fe73-4dc0-b05c-4c4868d2f515" containerID="b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1" exitCode=0 Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.574095 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq" event={"ID":"9efac87a-fe73-4dc0-b05c-4c4868d2f515","Type":"ContainerDied","Data":"a50067ec4dce85a3b5667a6294b20cf5d81da72612f869e018482301510d646f"} Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.576159 4904 generic.go:334] "Generic (PLEG): container finished" podID="5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" containerID="e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08" exitCode=0 Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.576297 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" event={"ID":"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4","Type":"ContainerDied","Data":"e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08"} Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.576395 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" event={"ID":"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4","Type":"ContainerDied","Data":"3d523d0e02ac69d19be996faf83506b34c3b1b907a1271df95231d4798d2fd30"} Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.576425 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cd655cb-b8mzl" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.590550 4904 scope.go:117] "RemoveContainer" containerID="b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1" Feb 23 10:10:20 crc kubenswrapper[4904]: E0223 10:10:20.591147 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1\": container with ID starting with b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1 not found: ID does not exist" containerID="b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.591240 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1"} err="failed to get container status \"b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1\": rpc error: code = NotFound desc = could not find container \"b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1\": container with ID starting with b3625e4dcf92b5ec2dd4116f85d0ebcb0f823023905f6c5fb84c754b30327cc1 not found: ID does not exist" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.591311 4904 scope.go:117] "RemoveContainer" containerID="e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.601249 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq"] Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.603646 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b44f69d7-gxkwq"] Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.611689 4904 scope.go:117] "RemoveContainer" containerID="e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08" Feb 23 10:10:20 crc kubenswrapper[4904]: E0223 10:10:20.612045 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08\": container with ID starting with e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08 not found: ID does not exist" containerID="e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.612087 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08"} err="failed to get container status \"e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08\": rpc error: code = NotFound desc = could not find container \"e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08\": container with ID starting with e59f27a1467efcb6ac358dac5ab57e89af96718a71306d1825a8b63d04372b08 not found: ID does not exist" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.665877 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-client-ca\") pod \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 
10:10:20.666015 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-serving-cert\") pod \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.666086 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-config\") pod \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.666202 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-proxy-ca-bundles\") pod \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.666258 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cccsv\" (UniqueName: \"kubernetes.io/projected/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-kube-api-access-cccsv\") pod \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\" (UID: \"5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4\") " Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.666538 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.666587 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5gtl\" (UniqueName: \"kubernetes.io/projected/9efac87a-fe73-4dc0-b05c-4c4868d2f515-kube-api-access-c5gtl\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.666605 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9efac87a-fe73-4dc0-b05c-4c4868d2f515-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.666616 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9efac87a-fe73-4dc0-b05c-4c4868d2f515-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.667229 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-config" (OuterVolumeSpecName: "config") pod "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" (UID: "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.667543 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" (UID: "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.667620 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" (UID: "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.669771 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-kube-api-access-cccsv" (OuterVolumeSpecName: "kube-api-access-cccsv") pod "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" (UID: "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4"). InnerVolumeSpecName "kube-api-access-cccsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.670574 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" (UID: "5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.768079 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.768143 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cccsv\" (UniqueName: \"kubernetes.io/projected/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-kube-api-access-cccsv\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.768165 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.768182 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.768200 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.901895 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9cd655cb-b8mzl"] Feb 23 10:10:20 crc kubenswrapper[4904]: I0223 10:10:20.904276 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b9cd655cb-b8mzl"] Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.089413 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b99b94dc7-9n88j"] Feb 23 10:10:21 crc kubenswrapper[4904]: E0223 10:10:21.089992 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efac87a-fe73-4dc0-b05c-4c4868d2f515" containerName="route-controller-manager" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.090102 4904 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9efac87a-fe73-4dc0-b05c-4c4868d2f515" containerName="route-controller-manager" Feb 23 10:10:21 crc kubenswrapper[4904]: E0223 10:10:21.090209 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" containerName="controller-manager" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.090280 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" containerName="controller-manager" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.090451 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" containerName="controller-manager" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.090524 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9efac87a-fe73-4dc0-b05c-4c4868d2f515" containerName="route-controller-manager" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.091034 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.091859 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw"] Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.092786 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.094209 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.094537 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.094663 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.094893 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.095580 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.095582 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.096873 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.096944 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.096969 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.097176 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.097289 4904 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.097700 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.103175 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b99b94dc7-9n88j"] Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.110694 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw"] Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.110791 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.262747 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4" path="/var/lib/kubelet/pods/5fdd5b6f-ecf0-48a3-b13a-92c96a9d54c4/volumes" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.263808 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9efac87a-fe73-4dc0-b05c-4c4868d2f515" path="/var/lib/kubelet/pods/9efac87a-fe73-4dc0-b05c-4c4868d2f515/volumes" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.273881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhss\" (UniqueName: \"kubernetes.io/projected/ffbda345-9a69-4c66-a08d-06ba0e59eb64-kube-api-access-xrhss\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.273930 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-config\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.273968 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-serving-cert\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.274049 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-client-ca\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.274193 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-proxy-ca-bundles\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " 
pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.274238 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78dg\" (UniqueName: \"kubernetes.io/projected/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-kube-api-access-f78dg\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.274319 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffbda345-9a69-4c66-a08d-06ba0e59eb64-serving-cert\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.274347 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffbda345-9a69-4c66-a08d-06ba0e59eb64-client-ca\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.274461 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffbda345-9a69-4c66-a08d-06ba0e59eb64-config\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375610 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffbda345-9a69-4c66-a08d-06ba0e59eb64-config\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375702 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhss\" (UniqueName: \"kubernetes.io/projected/ffbda345-9a69-4c66-a08d-06ba0e59eb64-kube-api-access-xrhss\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375750 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-config\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375781 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-serving-cert\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " 
pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375801 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-client-ca\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375826 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-proxy-ca-bundles\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375849 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f78dg\" (UniqueName: \"kubernetes.io/projected/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-kube-api-access-f78dg\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375892 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffbda345-9a69-4c66-a08d-06ba0e59eb64-serving-cert\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.375910 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffbda345-9a69-4c66-a08d-06ba0e59eb64-client-ca\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.376792 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffbda345-9a69-4c66-a08d-06ba0e59eb64-client-ca\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.376946 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffbda345-9a69-4c66-a08d-06ba0e59eb64-config\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.378009 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-client-ca\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.378070 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-proxy-ca-bundles\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.378383 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-config\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.393086 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffbda345-9a69-4c66-a08d-06ba0e59eb64-serving-cert\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.393179 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-serving-cert\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.397423 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrhss\" (UniqueName: \"kubernetes.io/projected/ffbda345-9a69-4c66-a08d-06ba0e59eb64-kube-api-access-xrhss\") pod \"route-controller-manager-5b6c5d6995-jlbbw\" (UID: \"ffbda345-9a69-4c66-a08d-06ba0e59eb64\") " pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.419503 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f78dg\" (UniqueName: \"kubernetes.io/projected/c07a4eac-1e9f-4588-90cb-9e30af1a2a17-kube-api-access-f78dg\") pod \"controller-manager-b99b94dc7-9n88j\" (UID: \"c07a4eac-1e9f-4588-90cb-9e30af1a2a17\") " pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.420106 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.712985 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:21 crc kubenswrapper[4904]: I0223 10:10:21.863020 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw"] Feb 23 10:10:21 crc kubenswrapper[4904]: W0223 10:10:21.873924 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffbda345_9a69_4c66_a08d_06ba0e59eb64.slice/crio-5969343dd6b97f49f5deb158e47463442d246026974059c8f06fad08e102644e WatchSource:0}: Error finding container 5969343dd6b97f49f5deb158e47463442d246026974059c8f06fad08e102644e: Status 404 returned error can't find the container with id 5969343dd6b97f49f5deb158e47463442d246026974059c8f06fad08e102644e Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.145248 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b99b94dc7-9n88j"] Feb 23 10:10:22 crc kubenswrapper[4904]: W0223 10:10:22.149164 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07a4eac_1e9f_4588_90cb_9e30af1a2a17.slice/crio-85dee148b3ff9d94c484237018b9568cbb030b1b8e0c9edb380a345cf4cb70aa WatchSource:0}: Error finding container 85dee148b3ff9d94c484237018b9568cbb030b1b8e0c9edb380a345cf4cb70aa: Status 404 returned error can't find the container with id 85dee148b3ff9d94c484237018b9568cbb030b1b8e0c9edb380a345cf4cb70aa Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.589144 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" event={"ID":"c07a4eac-1e9f-4588-90cb-9e30af1a2a17","Type":"ContainerStarted","Data":"f45a351b495b752dedca519c4859503a37994c51f35013a67d8726a6ad786b59"} Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.589448 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" event={"ID":"c07a4eac-1e9f-4588-90cb-9e30af1a2a17","Type":"ContainerStarted","Data":"85dee148b3ff9d94c484237018b9568cbb030b1b8e0c9edb380a345cf4cb70aa"} Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.590495 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.591894 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" event={"ID":"ffbda345-9a69-4c66-a08d-06ba0e59eb64","Type":"ContainerStarted","Data":"160b010dbb3e8e180eb898c206252703cd9b78512eee3c2f0d1b1dcdb5bc1fc4"} Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.591918 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" event={"ID":"ffbda345-9a69-4c66-a08d-06ba0e59eb64","Type":"ContainerStarted","Data":"5969343dd6b97f49f5deb158e47463442d246026974059c8f06fad08e102644e"} Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.592342 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.595473 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.597066 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.636143 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b99b94dc7-9n88j" podStartSLOduration=3.636122202 podStartE2EDuration="3.636122202s" podCreationTimestamp="2026-02-23 10:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:10:22.617009868 +0000 UTC m=+256.037383381" watchObservedRunningTime="2026-02-23 10:10:22.636122202 +0000 UTC m=+256.056495715" Feb 23 10:10:22 crc kubenswrapper[4904]: I0223 10:10:22.670207 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b6c5d6995-jlbbw" podStartSLOduration=3.67018646 podStartE2EDuration="3.67018646s" podCreationTimestamp="2026-02-23 10:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:10:22.638589002 +0000 UTC m=+256.058962515" watchObservedRunningTime="2026-02-23 10:10:22.67018646 +0000 UTC m=+256.090559983" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.230028 4904 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.231456 4904 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.231805 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0" gracePeriod=15 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.231990 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.232502 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f" gracePeriod=15 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.232549 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2" gracePeriod=15 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.232581 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c" gracePeriod=15 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.232604 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da" gracePeriod=15 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233210 4904 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233416 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233433 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233448 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233457 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233468 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233477 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233490 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233499 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233512 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233521 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233536 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233549 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233558 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233568 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233582 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233592 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.233605 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233614 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233780 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233794 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233807 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233817 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233831 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233842 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.233853 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.234067 4904 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.234101 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.234239 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.234258 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.267110 4904 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.279975 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.357979 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.358491 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.358569 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.358602 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.358667 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.358886 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.358979 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.359211 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460508 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460566 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460622 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460665 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460676 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460739 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460779 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 
10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460677 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460757 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460692 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.460902 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.461002 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.461038 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.461128 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.461143 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.461192 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 
10:10:27.583307 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:10:27 crc kubenswrapper[4904]: E0223 10:10:27.611017 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.138:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896d8640b06c138 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 10:10:27.610190136 +0000 UTC m=+261.030563649,LastTimestamp:2026-02-23 10:10:27.610190136 +0000 UTC m=+261.030563649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.620088 4904 generic.go:334] "Generic (PLEG): container finished" podID="acf67771-0a1e-4d5d-8003-5d46e639e02a" containerID="5c3032525e5634db7ac53a8c9397e8e45e95176e01b143e6e0820026a33a56ed" exitCode=0 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.620172 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"acf67771-0a1e-4d5d-8003-5d46e639e02a","Type":"ContainerDied","Data":"5c3032525e5634db7ac53a8c9397e8e45e95176e01b143e6e0820026a33a56ed"} Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.620762 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.621039 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.621579 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c598bc3676bf9b4b05ce1834415333bc00a2057dd2a1094450d47fd4ed828a39"} Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.627515 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.628759 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" 
Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.630796 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f" exitCode=0 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.630814 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c" exitCode=0 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.630824 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da" exitCode=0 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.630833 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2" exitCode=2 Feb 23 10:10:27 crc kubenswrapper[4904]: I0223 10:10:27.630907 4904 scope.go:117] "RemoveContainer" containerID="b17644313ad1d8823812ff784b93a3085621c301c593e363bf76ea42a5931f4f" Feb 23 10:10:28 crc kubenswrapper[4904]: I0223 10:10:28.640329 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 10:10:28 crc kubenswrapper[4904]: I0223 10:10:28.644968 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968"} Feb 23 10:10:28 crc kubenswrapper[4904]: I0223 10:10:28.645071 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:28 crc kubenswrapper[4904]: I0223 10:10:28.645287 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:28 crc kubenswrapper[4904]: I0223 10:10:28.978350 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:10:28 crc kubenswrapper[4904]: I0223 10:10:28.979153 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:28 crc kubenswrapper[4904]: I0223 10:10:28.979742 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.084192 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-kubelet-dir\") pod \"acf67771-0a1e-4d5d-8003-5d46e639e02a\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.084316 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "acf67771-0a1e-4d5d-8003-5d46e639e02a" (UID: "acf67771-0a1e-4d5d-8003-5d46e639e02a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.084399 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acf67771-0a1e-4d5d-8003-5d46e639e02a-kube-api-access\") pod \"acf67771-0a1e-4d5d-8003-5d46e639e02a\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.084440 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-var-lock\") pod \"acf67771-0a1e-4d5d-8003-5d46e639e02a\" (UID: \"acf67771-0a1e-4d5d-8003-5d46e639e02a\") " Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.084564 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-var-lock" (OuterVolumeSpecName: "var-lock") pod "acf67771-0a1e-4d5d-8003-5d46e639e02a" (UID: "acf67771-0a1e-4d5d-8003-5d46e639e02a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.084765 4904 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.084782 4904 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acf67771-0a1e-4d5d-8003-5d46e639e02a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.091929 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf67771-0a1e-4d5d-8003-5d46e639e02a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "acf67771-0a1e-4d5d-8003-5d46e639e02a" (UID: "acf67771-0a1e-4d5d-8003-5d46e639e02a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.185764 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acf67771-0a1e-4d5d-8003-5d46e639e02a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.651792 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.653257 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0" exitCode=0 Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.660290 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"acf67771-0a1e-4d5d-8003-5d46e639e02a","Type":"ContainerDied","Data":"bb04ec70171af7a032fbcda25726419b5fa3ff4cf03195d7905a1c7cf2188ccd"} Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.660330 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb04ec70171af7a032fbcda25726419b5fa3ff4cf03195d7905a1c7cf2188ccd" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.660353 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.690339 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.690941 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.694587 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.695488 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.696189 4904 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.696774 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.697473 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.794779 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.794918 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.795002 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.795075 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.795082 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.795157 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.795484 4904 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.795508 4904 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:29 crc kubenswrapper[4904]: I0223 10:10:29.795520 4904 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:10:30 crc kubenswrapper[4904]: E0223 10:10:30.319282 4904 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.138:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" volumeName="registry-storage" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.670810 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.671740 4904 scope.go:117] "RemoveContainer" containerID="ac229cc3e60167ba21b61fb15ff75b8a24a0341875e59422d482f9a1fbeacd4f" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.671823 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.686433 4904 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.687026 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.687527 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.694480 4904 scope.go:117] "RemoveContainer" containerID="5c6f1a91722cb93880e164aeb7c4a66405b5fd666645d8ae3a989f75abac6e4c" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.713684 4904 scope.go:117] "RemoveContainer" containerID="b7f56813fbbeb7dc906a1216e69e83ec3c17aeae85fc6b3885f0a052284692da" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.732884 4904 scope.go:117] "RemoveContainer" containerID="639dc031a7beb00eaa485b262cd341d5a29c49e058ca4a75329cf9f2123097c2" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.749216 4904 scope.go:117] "RemoveContainer" containerID="4ac781d036e51ea71968b9840e2bb17e324507ca7536550fea64beaa485a74b0" Feb 23 10:10:30 crc kubenswrapper[4904]: I0223 10:10:30.765087 4904 scope.go:117] "RemoveContainer" containerID="b56cc55a2831c85de0012503f2082186cfd873e858b48e7b91859c08bcdd0d90" Feb 23 10:10:31 crc kubenswrapper[4904]: I0223 10:10:31.270657 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 23 10:10:34 crc kubenswrapper[4904]: E0223 10:10:34.141896 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.138:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896d8640b06c138 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 10:10:27.610190136 +0000 UTC m=+261.030563649,LastTimestamp:2026-02-23 10:10:27.610190136 +0000 UTC m=+261.030563649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.003516 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:10:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:10:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:10:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T10:10:37Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.004176 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.004358 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.004524 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.004700 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.004713 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 23 10:10:37 crc kubenswrapper[4904]: I0223 10:10:37.259461 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: I0223 10:10:37.262053 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.287964 4904 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.288499 4904 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.300342 4904 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.300694 4904 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.301035 4904 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused"
Feb 23 10:10:37 crc kubenswrapper[4904]: I0223 10:10:37.301061 4904 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.301238 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="200ms"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.502183 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="400ms"
Feb 23 10:10:37 crc kubenswrapper[4904]: E0223 10:10:37.903215 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="800ms"
Feb 23 10:10:38 crc kubenswrapper[4904]: E0223 10:10:38.704090 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="1.6s"
Feb 23 10:10:40 crc kubenswrapper[4904]: E0223 10:10:40.305117 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="3.2s"
Feb 23 10:10:40 crc kubenswrapper[4904]: I0223 10:10:40.731593 4904 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 10:10:40 crc kubenswrapper[4904]: I0223 10:10:40.732887 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 10:10:40 crc kubenswrapper[4904]: I0223 10:10:40.732922 4904 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="00034118ac46fa80f07637055d7140743737693c2fb6b0f4bc1924c40c19eb94" exitCode=1 Feb 23 10:10:40 crc kubenswrapper[4904]: I0223 10:10:40.732953 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"00034118ac46fa80f07637055d7140743737693c2fb6b0f4bc1924c40c19eb94"} Feb 23 10:10:40 crc kubenswrapper[4904]: I0223 10:10:40.733438 4904 scope.go:117] "RemoveContainer" containerID="00034118ac46fa80f07637055d7140743737693c2fb6b0f4bc1924c40c19eb94" Feb 23 10:10:40 crc kubenswrapper[4904]: I0223 10:10:40.734450 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:40 crc kubenswrapper[4904]: I0223 10:10:40.734773 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:40 crc kubenswrapper[4904]: I0223 10:10:40.735107 4904 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.254998 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.255972 4904 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.256487 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.256903 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.275558 4904 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.275596 4904 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:41 crc kubenswrapper[4904]: E0223 10:10:41.276066 4904 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.276533 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:41 crc kubenswrapper[4904]: W0223 10:10:41.307524 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-39404eb7e0e07b91d5685b9b43fcb7e4ca947bc21f686a44ab214f77286e36e5 WatchSource:0}: Error finding container 39404eb7e0e07b91d5685b9b43fcb7e4ca947bc21f686a44ab214f77286e36e5: Status 404 returned error can't find the container with id 39404eb7e0e07b91d5685b9b43fcb7e4ca947bc21f686a44ab214f77286e36e5 Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.741363 4904 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7d31143eef2c541783e1e043bb481bb05ceeb48e69a8eca7a9bc8779eaff8dff" exitCode=0 Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.741495 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7d31143eef2c541783e1e043bb481bb05ceeb48e69a8eca7a9bc8779eaff8dff"} Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.741554 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"39404eb7e0e07b91d5685b9b43fcb7e4ca947bc21f686a44ab214f77286e36e5"} Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.742062 4904 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.742107 4904 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:41 crc kubenswrapper[4904]: E0223 10:10:41.742514 4904 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.742535 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.743035 4904 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.743425 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.744292 4904 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.745652 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.745769 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a9e46c1e7322d46e0702294683ea9325a55df40dee953b419647cc5df4800ad0"} Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.746472 4904 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.747027 4904 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:41 crc kubenswrapper[4904]: I0223 10:10:41.747457 4904 status_manager.go:851] "Failed to get status for pod" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.138:6443: connect: connection refused" Feb 23 10:10:42 crc kubenswrapper[4904]: I0223 10:10:42.763106 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64fa1878fa260e14a8ceaad0b7a81cbb0aa15512edce553bcb063bd941ab713d"} Feb 23 10:10:42 crc kubenswrapper[4904]: I0223 10:10:42.763453 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5fa4208dab4cd039184036a9f3d198f87acde1a4c27330154154e1b6f74109bd"} Feb 23 10:10:42 crc kubenswrapper[4904]: I0223 10:10:42.763467 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e05c7be9e62197a823e9a69c139d1087af523ffd40ed807a79b56bf82980f620"} Feb 23 10:10:42 crc kubenswrapper[4904]: I0223 10:10:42.763477 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1735ba082bf508f4dcb519d8a411580c286cc9b74c198797c9eea418b8315f74"} Feb 23 10:10:43 crc kubenswrapper[4904]: I0223 10:10:43.769414 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26b3c4a6a49690b1c08a1c830d810b5008ec93847f56a636ef38f79002754703"} Feb 23 10:10:43 crc kubenswrapper[4904]: I0223 10:10:43.769737 4904 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:43 crc kubenswrapper[4904]: I0223 10:10:43.769757 4904 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:43 crc kubenswrapper[4904]: I0223 10:10:43.769940 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:44 crc kubenswrapper[4904]: I0223 10:10:44.066466 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:10:46 crc kubenswrapper[4904]: I0223 10:10:46.278053 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:46 crc kubenswrapper[4904]: I0223 10:10:46.278332 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:46 crc kubenswrapper[4904]: I0223 10:10:46.289449 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:47 crc kubenswrapper[4904]: I0223 10:10:47.398588 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:10:47 crc kubenswrapper[4904]: I0223 10:10:47.398656 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:10:47 crc kubenswrapper[4904]: I0223 10:10:47.398758 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:10:47 crc kubenswrapper[4904]: I0223 10:10:47.399353 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:10:47 crc kubenswrapper[4904]: I0223 10:10:47.399427 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde" gracePeriod=600 Feb 23 10:10:47 crc kubenswrapper[4904]: I0223 10:10:47.803204 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde" exitCode=0 Feb 23 10:10:47 
crc kubenswrapper[4904]: I0223 10:10:47.803313 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde"} Feb 23 10:10:47 crc kubenswrapper[4904]: I0223 10:10:47.803619 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"ae646e613a51e7f064fed7847be41ac44a2747d1308e11ccf82810a5d1a00115"} Feb 23 10:10:48 crc kubenswrapper[4904]: I0223 10:10:48.778592 4904 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:48 crc kubenswrapper[4904]: I0223 10:10:48.809968 4904 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:48 crc kubenswrapper[4904]: I0223 10:10:48.810009 4904 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:48 crc kubenswrapper[4904]: I0223 10:10:48.814214 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:10:48 crc kubenswrapper[4904]: I0223 10:10:48.881824 4904 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8839cde-a6ae-404e-9c81-0bc860fef4a5" Feb 23 10:10:49 crc kubenswrapper[4904]: I0223 10:10:49.628039 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:10:49 crc kubenswrapper[4904]: I0223 10:10:49.628195 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 23 10:10:49 crc kubenswrapper[4904]: I0223 10:10:49.628499 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 23 10:10:49 crc kubenswrapper[4904]: I0223 10:10:49.816930 4904 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:49 crc kubenswrapper[4904]: I0223 10:10:49.816961 4904 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="02cfb4a5-eb71-48e2-b719-1dd674114a42" Feb 23 10:10:49 crc kubenswrapper[4904]: I0223 10:10:49.820096 4904 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8839cde-a6ae-404e-9c81-0bc860fef4a5" Feb 23 10:10:58 crc kubenswrapper[4904]: I0223 10:10:58.133856 4904 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Feb 23 10:10:58 crc kubenswrapper[4904]: I0223 10:10:58.705196 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 10:10:58 crc kubenswrapper[4904]: I0223 10:10:58.975033 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.158093 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.342135 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.576976 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.948050 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.948101 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.951109 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.951525 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.953809 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 10:10:59 crc kubenswrapper[4904]: I0223 10:10:59.956479 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 10:11:00 crc kubenswrapper[4904]: I0223 10:11:00.030799 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 10:11:00 crc kubenswrapper[4904]: I0223 10:11:00.149737 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 23 10:11:00 crc kubenswrapper[4904]: I0223 10:11:00.270455 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 10:11:00 crc kubenswrapper[4904]: I0223 10:11:00.371363 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 10:11:00 crc kubenswrapper[4904]: I0223 10:11:00.595780 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.316110 4904 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.418945 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.435173 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.508894 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.526350 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.535703 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.579691 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.584547 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.629820 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.764325 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.826918 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.866350 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.874654 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.881786 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.890850 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.949232 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 10:11:01 crc kubenswrapper[4904]: I0223 10:11:01.965527 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.072922 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.148398 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.177960 4904 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.183823 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.218752 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.297702 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.538167 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.638561 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.646179 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.656873 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.720200 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.736755 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.761088 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.908843 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.955652 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 10:11:02 crc kubenswrapper[4904]: I0223 10:11:02.955841 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 10:11:03 crc kubenswrapper[4904]: I0223 10:11:03.110398 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 10:11:03 crc kubenswrapper[4904]: I0223 10:11:03.134343 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 10:11:03 crc kubenswrapper[4904]: I0223 10:11:03.314378 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 10:11:03 crc kubenswrapper[4904]: I0223 10:11:03.384356 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 10:11:03 crc kubenswrapper[4904]: I0223 10:11:03.511693 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 10:11:03 crc kubenswrapper[4904]: I0223 10:11:03.518536 4904 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 23 10:11:03 crc kubenswrapper[4904]: I0223 10:11:03.705051 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 23 10:11:03 crc kubenswrapper[4904]: I0223 10:11:03.799750 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.089743 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.124911 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.374675 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.378169 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.439880 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.446412 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.477154 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.502727 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.572669 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.613274 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.632879 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.660685 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.855313 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.889643 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.897103 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.920705 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.936822 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 23 10:11:04 crc kubenswrapper[4904]: I0223 10:11:04.975216 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.118842 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.137883 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.138739 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.158996 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.164392 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.197573 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.235061 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.240188 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.282427 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.338993 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.349261 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.407088 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.537252 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.537253 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.561804 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.566409 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.595708 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.619829 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.683780 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.784919 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.832746 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.909882 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.960618 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 23 10:11:05 crc kubenswrapper[4904]: I0223 10:11:05.976475 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.007036 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.015815 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.023590 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.023896 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.038906 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.157843 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.172710 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.236960 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.528968 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.549380 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.560838 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.575412 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.634654 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
*v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.668440 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.748148 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.798373 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.911494 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.911667 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.972609 4904 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.975964 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.97593939 podStartE2EDuration="39.97593939s" podCreationTimestamp="2026-02-23 10:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:10:48.799754803 +0000 UTC m=+282.220128326" watchObservedRunningTime="2026-02-23 10:11:06.97593939 +0000 UTC m=+300.396312913" Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.979003 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.979080 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.981308 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 10:11:06 crc kubenswrapper[4904]: I0223 10:11:06.983924 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.003018 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.002993624 podStartE2EDuration="19.002993624s" podCreationTimestamp="2026-02-23 10:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:11:06.997916589 +0000 UTC m=+300.418290112" watchObservedRunningTime="2026-02-23 10:11:07.002993624 +0000 UTC m=+300.423367157" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.032032 4904 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.098959 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.101041 4904 reflector.go:368] Caches populated for *v1.ConfigMap from 
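The pod_startup_latency_tracker lines above carry their arithmetic in the fields: podStartSLOduration is the observed running time minus podCreationTimestamp, less any time spent pulling images (firstStartedPulling/lastFinishedPulling are the zero time here because nothing was pulled). A small sketch reproducing the kube-apiserver-crc number from the log, using the watch-observed running time (simplified; not the kubelet's tracker):

```go
package main

import (
	"fmt"
	"time"
)

// startSLODuration: creation-to-running latency, excluding image pulls.
func startSLODuration(created, observedRunning, firstPull, lastPull time.Time) time.Duration {
	d := observedRunning.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		d -= lastPull.Sub(firstPull) // image pull time does not count against the SLO
	}
	return d
}

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-02-23T10:10:48Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-02-23T10:11:07.002993624Z")
	// Pull timestamps are the zero time, as in the log ("0001-01-01 00:00:00 +0000 UTC").
	fmt.Println(startSLODuration(created, running, time.Time{}, time.Time{}))
	// Prints 19.002993624s, matching podStartSLOduration=19.002993624.
}
```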
object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.112433 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.145610 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.163477 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.262153 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.268550 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.410760 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.533626 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.537292 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.566764 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.623657 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.623790 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.761195 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.771952 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.785592 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.927309 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.971954 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 10:11:07 crc kubenswrapper[4904]: I0223 10:11:07.984001 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.072671 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.181495 4904 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.331787 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.383820 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.592475 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.600939 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.616401 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.670628 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.673361 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.681987 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.819161 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.823450 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.823995 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.829694 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 10:11:08 crc kubenswrapper[4904]: I0223 10:11:08.894053 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.047885 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.062210 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.108287 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.209657 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.226776 4904 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.285590 4904 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.327650 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.355017 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.436177 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.478426 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.500657 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.627758 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.627827 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.627888 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.628564 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"a9e46c1e7322d46e0702294683ea9325a55df40dee953b419647cc5df4800ad0"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.628669 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://a9e46c1e7322d46e0702294683ea9325a55df40dee953b419647cc5df4800ad0" gracePeriod=30 Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.723050 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.829283 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.878270 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.987523 4904 reflector.go:368] Caches populated for *v1.ConfigMap 
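The patch_prober/prober lines above show what the startup-probe failure looks like at the wire level: an HTTPS GET to the controller manager's /healthz that is refused because nothing is listening yet, which the kubelet counts as a probe failure and answers by killing the container so it restarts. A hedged sketch of such a probe using plain net/http (the real prober lives in the kubelet's prober package; HTTPS probes skip certificate verification):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues the same kind of HTTPS GET the kubelet sends to
// kube-controller-manager's /healthz endpoint. (Sketch only.)
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// After enough consecutive failures the kubelet marks the startup
	// probe unhealthy and kills the container for a restart.
	if err := probeHealthz("https://192.168.126.11:10257/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```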
from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 10:11:09 crc kubenswrapper[4904]: I0223 10:11:09.988134 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.115278 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.144563 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.189925 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.222806 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.277243 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.277791 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.292083 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.403738 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.424156 4904 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.454706 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.456066 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.458810 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.492138 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.520283 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.555702 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.556876 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.784537 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.827592 4904 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Feb 23 10:11:10 crc kubenswrapper[4904]: I0223 10:11:10.838282 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.000771 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.084993 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.261362 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.261628 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.303426 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.314904 4904 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.315337 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968" gracePeriod=5 Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.368705 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.474745 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.481340 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.502588 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.507871 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.708966 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.873974 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.926609 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 10:11:11 crc kubenswrapper[4904]: I0223 10:11:11.983919 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.078206 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.154808 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.217296 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.274593 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.286420 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.296677 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.401015 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.449224 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.452315 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.539478 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.709590 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.740175 4904 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.774455 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.781605 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.789377 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8f9wz"]
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.790021 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8f9wz" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerName="registry-server" containerID="cri-o://b753d60449f0fef8908b66b1f333f64a1fcbad31debb3ca2cc7a5cf8795c6431" gracePeriod=30
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.794367 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cf449"]
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.795504 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cf449" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" containerName="registry-server" containerID="cri-o://77315dedf95435116fc29789d760c3d10059d3a8d3d30d2adff882f9146f0fad" gracePeriod=30
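Each "Killing container with a grace period" line above is the kubelet asking the runtime (cri-o here) to stop a container, giving it gracePeriod seconds between SIGTERM and SIGKILL. A self-contained sketch of that escalation policy, applied to a local process rather than through the CRI (the PID is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// killWithGracePeriod mirrors the contract behind the log line: deliver
// SIGTERM, give the process the grace period to exit, then escalate to
// SIGKILL. The kubelet does this via the container runtime; this sketch
// applies the same policy to a plain Unix process.
func killWithGracePeriod(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 probes whether the process still exists.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // exited within the grace period
		}
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Printf("grace period %s elapsed, sending SIGKILL to %d\n", grace, pid)
	return proc.Signal(syscall.SIGKILL)
}

func main() {
	_ = killWithGracePeriod(12345, 30*time.Second) // hypothetical PID
}
```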
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.810134 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bfn9"]
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.810625 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" podUID="74d9be42-ba8c-426b-b4b0-bad0bc65648b" containerName="marketplace-operator" containerID="cri-o://bb2992648e779ed50920e7653b5883f606c9e678933be0e56e072a7475f31f83" gracePeriod=30
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.811969 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.813403 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.826494 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mlwp"]
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.826974 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6mlwp" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="registry-server" containerID="cri-o://ef29420b80436415ec0be25f2f51f11011887ab8c524980d67832f2ceb183096" gracePeriod=30
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.835023 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6l6g"]
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.835268 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j6l6g" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="registry-server" containerID="cri-o://3364027a197a57b7410e29e5b581fdf5b25d085cd360bacdc181ac5034be108b" gracePeriod=30
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.871323 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.884502 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.886676 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.963707 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 23 10:11:12 crc kubenswrapper[4904]: I0223 10:11:12.974754 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.020853 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.033282 4904 generic.go:334] "Generic (PLEG): container finished" podID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerID="b753d60449f0fef8908b66b1f333f64a1fcbad31debb3ca2cc7a5cf8795c6431" exitCode=0
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.033353 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f9wz" event={"ID":"e82107db-3789-4ed9-8b7a-7dc968cb833f","Type":"ContainerDied","Data":"b753d60449f0fef8908b66b1f333f64a1fcbad31debb3ca2cc7a5cf8795c6431"}
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.037695 4904 generic.go:334] "Generic (PLEG): container finished" podID="74d9be42-ba8c-426b-b4b0-bad0bc65648b" containerID="bb2992648e779ed50920e7653b5883f606c9e678933be0e56e072a7475f31f83" exitCode=0
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.037817 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" event={"ID":"74d9be42-ba8c-426b-b4b0-bad0bc65648b","Type":"ContainerDied","Data":"bb2992648e779ed50920e7653b5883f606c9e678933be0e56e072a7475f31f83"}
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.047848 4904 generic.go:334] "Generic (PLEG): container finished" podID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerID="ef29420b80436415ec0be25f2f51f11011887ab8c524980d67832f2ceb183096" exitCode=0
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.047951 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mlwp" event={"ID":"46dff5e0-cb39-492e-95c6-33e67169ef87","Type":"ContainerDied","Data":"ef29420b80436415ec0be25f2f51f11011887ab8c524980d67832f2ceb183096"}
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.054999 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.061546 4904 generic.go:334] "Generic (PLEG): container finished" podID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerID="3364027a197a57b7410e29e5b581fdf5b25d085cd360bacdc181ac5034be108b" exitCode=0
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.061635 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6l6g" event={"ID":"2d608de8-d727-4542-b6aa-4fedda2eaa3f","Type":"ContainerDied","Data":"3364027a197a57b7410e29e5b581fdf5b25d085cd360bacdc181ac5034be108b"}
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.064239 4904 generic.go:334] "Generic (PLEG): container finished" podID="2ae8aff6-7d01-4352-b974-418342d434b3" containerID="77315dedf95435116fc29789d760c3d10059d3a8d3d30d2adff882f9146f0fad" exitCode=0
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.064324 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf449" event={"ID":"2ae8aff6-7d01-4352-b974-418342d434b3","Type":"ContainerDied","Data":"77315dedf95435116fc29789d760c3d10059d3a8d3d30d2adff882f9146f0fad"}
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.083916 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.201183 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.274071 4904 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.318366 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.397302 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.404784 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8f9wz"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.406161 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6l6g"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.410663 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cf449"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.415424 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mlwp"
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.423547 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-trusted-ca\") pod \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.423662 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-operator-metrics\") pod \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.423736 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ljpz\" (UniqueName: \"kubernetes.io/projected/74d9be42-ba8c-426b-b4b0-bad0bc65648b-kube-api-access-2ljpz\") pod \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\" (UID: \"74d9be42-ba8c-426b-b4b0-bad0bc65648b\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.425435 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "74d9be42-ba8c-426b-b4b0-bad0bc65648b" (UID: "74d9be42-ba8c-426b-b4b0-bad0bc65648b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.429180 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d9be42-ba8c-426b-b4b0-bad0bc65648b-kube-api-access-2ljpz" (OuterVolumeSpecName: "kube-api-access-2ljpz") pod "74d9be42-ba8c-426b-b4b0-bad0bc65648b" (UID: "74d9be42-ba8c-426b-b4b0-bad0bc65648b"). InnerVolumeSpecName "kube-api-access-2ljpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.433441 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "74d9be42-ba8c-426b-b4b0-bad0bc65648b" (UID: "74d9be42-ba8c-426b-b4b0-bad0bc65648b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526012 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzzt7\" (UniqueName: \"kubernetes.io/projected/46dff5e0-cb39-492e-95c6-33e67169ef87-kube-api-access-kzzt7\") pod \"46dff5e0-cb39-492e-95c6-33e67169ef87\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526084 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82c52\" (UniqueName: \"kubernetes.io/projected/e82107db-3789-4ed9-8b7a-7dc968cb833f-kube-api-access-82c52\") pod \"e82107db-3789-4ed9-8b7a-7dc968cb833f\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526113 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-catalog-content\") pod \"46dff5e0-cb39-492e-95c6-33e67169ef87\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526178 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-catalog-content\") pod \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526216 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-utilities\") pod \"46dff5e0-cb39-492e-95c6-33e67169ef87\" (UID: \"46dff5e0-cb39-492e-95c6-33e67169ef87\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526278 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-catalog-content\") pod \"e82107db-3789-4ed9-8b7a-7dc968cb833f\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526335 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-utilities\") pod \"2ae8aff6-7d01-4352-b974-418342d434b3\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526360 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f64dk\" (UniqueName: \"kubernetes.io/projected/2ae8aff6-7d01-4352-b974-418342d434b3-kube-api-access-f64dk\") pod \"2ae8aff6-7d01-4352-b974-418342d434b3\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526425 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pklwb\" (UniqueName: \"kubernetes.io/projected/2d608de8-d727-4542-b6aa-4fedda2eaa3f-kube-api-access-pklwb\") pod \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526469 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-utilities\") pod \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\" (UID: \"2d608de8-d727-4542-b6aa-4fedda2eaa3f\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526495 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-utilities\") pod \"e82107db-3789-4ed9-8b7a-7dc968cb833f\" (UID: \"e82107db-3789-4ed9-8b7a-7dc968cb833f\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526533 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-catalog-content\") pod \"2ae8aff6-7d01-4352-b974-418342d434b3\" (UID: \"2ae8aff6-7d01-4352-b974-418342d434b3\") "
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526841 4904 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526867 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ljpz\" (UniqueName: \"kubernetes.io/projected/74d9be42-ba8c-426b-b4b0-bad0bc65648b-kube-api-access-2ljpz\") on node \"crc\" DevicePath \"\""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.526878 4904 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74d9be42-ba8c-426b-b4b0-bad0bc65648b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.527878 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-utilities" (OuterVolumeSpecName: "utilities") pod "46dff5e0-cb39-492e-95c6-33e67169ef87" (UID: "46dff5e0-cb39-492e-95c6-33e67169ef87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.528047 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-utilities" (OuterVolumeSpecName: "utilities") pod "2d608de8-d727-4542-b6aa-4fedda2eaa3f" (UID: "2d608de8-d727-4542-b6aa-4fedda2eaa3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.528846 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-utilities" (OuterVolumeSpecName: "utilities") pod "e82107db-3789-4ed9-8b7a-7dc968cb833f" (UID: "e82107db-3789-4ed9-8b7a-7dc968cb833f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.529888 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e82107db-3789-4ed9-8b7a-7dc968cb833f-kube-api-access-82c52" (OuterVolumeSpecName: "kube-api-access-82c52") pod "e82107db-3789-4ed9-8b7a-7dc968cb833f" (UID: "e82107db-3789-4ed9-8b7a-7dc968cb833f"). InnerVolumeSpecName "kube-api-access-82c52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.530282 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae8aff6-7d01-4352-b974-418342d434b3-kube-api-access-f64dk" (OuterVolumeSpecName: "kube-api-access-f64dk") pod "2ae8aff6-7d01-4352-b974-418342d434b3" (UID: "2ae8aff6-7d01-4352-b974-418342d434b3"). InnerVolumeSpecName "kube-api-access-f64dk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.541922 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-utilities" (OuterVolumeSpecName: "utilities") pod "2ae8aff6-7d01-4352-b974-418342d434b3" (UID: "2ae8aff6-7d01-4352-b974-418342d434b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.544323 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d608de8-d727-4542-b6aa-4fedda2eaa3f-kube-api-access-pklwb" (OuterVolumeSpecName: "kube-api-access-pklwb") pod "2d608de8-d727-4542-b6aa-4fedda2eaa3f" (UID: "2d608de8-d727-4542-b6aa-4fedda2eaa3f"). InnerVolumeSpecName "kube-api-access-pklwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.546204 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dff5e0-cb39-492e-95c6-33e67169ef87-kube-api-access-kzzt7" (OuterVolumeSpecName: "kube-api-access-kzzt7") pod "46dff5e0-cb39-492e-95c6-33e67169ef87" (UID: "46dff5e0-cb39-492e-95c6-33e67169ef87"). InnerVolumeSpecName "kube-api-access-kzzt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.551565 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46dff5e0-cb39-492e-95c6-33e67169ef87" (UID: "46dff5e0-cb39-492e-95c6-33e67169ef87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.556857 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.590991 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.593811 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.600361 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ae8aff6-7d01-4352-b974-418342d434b3" (UID: "2ae8aff6-7d01-4352-b974-418342d434b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.604427 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e82107db-3789-4ed9-8b7a-7dc968cb833f" (UID: "e82107db-3789-4ed9-8b7a-7dc968cb833f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628243 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzzt7\" (UniqueName: \"kubernetes.io/projected/46dff5e0-cb39-492e-95c6-33e67169ef87-kube-api-access-kzzt7\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628285 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82c52\" (UniqueName: \"kubernetes.io/projected/e82107db-3789-4ed9-8b7a-7dc968cb833f-kube-api-access-82c52\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628299 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628312 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dff5e0-cb39-492e-95c6-33e67169ef87-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628325 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628337 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628348 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f64dk\" (UniqueName: \"kubernetes.io/projected/2ae8aff6-7d01-4352-b974-418342d434b3-kube-api-access-f64dk\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628360 4904 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pklwb\" (UniqueName: \"kubernetes.io/projected/2d608de8-d727-4542-b6aa-4fedda2eaa3f-kube-api-access-pklwb\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628372 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628384 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82107db-3789-4ed9-8b7a-7dc968cb833f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.628395 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae8aff6-7d01-4352-b974-418342d434b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.671195 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d608de8-d727-4542-b6aa-4fedda2eaa3f" (UID: "2d608de8-d727-4542-b6aa-4fedda2eaa3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.729201 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d608de8-d727-4542-b6aa-4fedda2eaa3f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.741946 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.791761 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.861302 4904 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.866392 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 10:11:13 crc kubenswrapper[4904]: I0223 10:11:13.952941 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.073056 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6l6g" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.073049 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6l6g" event={"ID":"2d608de8-d727-4542-b6aa-4fedda2eaa3f","Type":"ContainerDied","Data":"713182297aaf7f71996b6430eca0604e06d9b5f792f07ea387a7c45d714e5834"} Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.073258 4904 scope.go:117] "RemoveContainer" containerID="3364027a197a57b7410e29e5b581fdf5b25d085cd360bacdc181ac5034be108b" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.077167 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf449" event={"ID":"2ae8aff6-7d01-4352-b974-418342d434b3","Type":"ContainerDied","Data":"fef1cfee109cffd295edff27934fe6ac133928ce91e82fd18ebd9f986879da45"} Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.077356 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cf449" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.080512 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f9wz" event={"ID":"e82107db-3789-4ed9-8b7a-7dc968cb833f","Type":"ContainerDied","Data":"9c7667ea1208b007d7a93fe02462ca88457f9be08406548a079dd2fcdacbefcb"} Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.080626 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8f9wz" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.087402 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" event={"ID":"74d9be42-ba8c-426b-b4b0-bad0bc65648b","Type":"ContainerDied","Data":"478048890c09a8ee1aa9b4c5160738db11279d2032d3b23666e95a03d028518f"} Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.087458 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bfn9" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.092992 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mlwp" event={"ID":"46dff5e0-cb39-492e-95c6-33e67169ef87","Type":"ContainerDied","Data":"e8c4df4960650366dbc0b7fea9bf5bbe671580245d19deb32560b10b7084d31c"} Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.093030 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mlwp" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.109356 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.111346 4904 scope.go:117] "RemoveContainer" containerID="0b3f67940aba8b56fbb10735874feb708cb44320b7792db52f78708de7c04ab4" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.132087 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.135063 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.145962 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6l6g"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.149093 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j6l6g"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.165398 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.175504 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8f9wz"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.176828 4904 scope.go:117] "RemoveContainer" containerID="ccf44b1147dd1cfa9e51a9a4bf3e5b382eca83539c94edd06f227d8c5a0be1b2" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.187694 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8f9wz"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.198215 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bfn9"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.205428 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bfn9"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.207737 4904 scope.go:117] "RemoveContainer" containerID="77315dedf95435116fc29789d760c3d10059d3a8d3d30d2adff882f9146f0fad" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.211459 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cf449"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.216916 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cf449"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.218186 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.220525 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mlwp"] Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.222830 4904 scope.go:117] "RemoveContainer" containerID="c8377cd1e4be2464b417e4a2a043243041f97b6a2bf6467dc762e1d82db024b9" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.224133 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mlwp"] Feb 23 10:11:14 crc 
kubenswrapper[4904]: I0223 10:11:14.238373 4904 scope.go:117] "RemoveContainer" containerID="bd52c846636280445bf4622aa3afd130953ce7fe82e423d222d6fe698ad00738" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.254235 4904 scope.go:117] "RemoveContainer" containerID="b753d60449f0fef8908b66b1f333f64a1fcbad31debb3ca2cc7a5cf8795c6431" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.275861 4904 scope.go:117] "RemoveContainer" containerID="cb34f6d4ebc97312880e54405dc5011c66d3a5c91604d0744462652e8c1e3023" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.299835 4904 scope.go:117] "RemoveContainer" containerID="200d26b535af51b11d23dba0f945b61f5c8b20b2c48daf3cb9d6721321a8f87e" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.300398 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.312803 4904 scope.go:117] "RemoveContainer" containerID="bb2992648e779ed50920e7653b5883f606c9e678933be0e56e072a7475f31f83" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.318930 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.327276 4904 scope.go:117] "RemoveContainer" containerID="ef29420b80436415ec0be25f2f51f11011887ab8c524980d67832f2ceb183096" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.348764 4904 scope.go:117] "RemoveContainer" containerID="8b142c1fa1d2de05a6f8507096e91594bac772cd8726e5dd2901170de7bb71cc" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.372427 4904 scope.go:117] "RemoveContainer" containerID="87d9189f91707c41493cb9dce58f8d4e7720f614e2a8d61125f68e2a4126b967" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.810005 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 10:11:14 crc kubenswrapper[4904]: I0223 10:11:14.820307 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 10:11:15 crc kubenswrapper[4904]: I0223 10:11:15.041115 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 23 10:11:15 crc kubenswrapper[4904]: I0223 10:11:15.265628 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" path="/var/lib/kubelet/pods/2ae8aff6-7d01-4352-b974-418342d434b3/volumes" Feb 23 10:11:15 crc kubenswrapper[4904]: I0223 10:11:15.267106 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" path="/var/lib/kubelet/pods/2d608de8-d727-4542-b6aa-4fedda2eaa3f/volumes" Feb 23 10:11:15 crc kubenswrapper[4904]: I0223 10:11:15.267793 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" path="/var/lib/kubelet/pods/46dff5e0-cb39-492e-95c6-33e67169ef87/volumes" Feb 23 10:11:15 crc kubenswrapper[4904]: I0223 10:11:15.269108 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d9be42-ba8c-426b-b4b0-bad0bc65648b" path="/var/lib/kubelet/pods/74d9be42-ba8c-426b-b4b0-bad0bc65648b/volumes" Feb 23 10:11:15 crc kubenswrapper[4904]: I0223 10:11:15.269584 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" 
path="/var/lib/kubelet/pods/e82107db-3789-4ed9-8b7a-7dc968cb833f/volumes" Feb 23 10:11:15 crc kubenswrapper[4904]: I0223 10:11:15.396520 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 10:11:15 crc kubenswrapper[4904]: I0223 10:11:15.445254 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.074212 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.887462 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.887536 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.983848 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.983982 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984022 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984070 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984128 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984151 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984152 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984189 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984174 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984771 4904 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984793 4904 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984803 4904 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.984814 4904 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:16 crc kubenswrapper[4904]: I0223 10:11:16.995078 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.085940 4904 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.122199 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.122254 4904 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968" exitCode=137 Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.122301 4904 scope.go:117] "RemoveContainer" containerID="ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.122352 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.137408 4904 scope.go:117] "RemoveContainer" containerID="ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968" Feb 23 10:11:17 crc kubenswrapper[4904]: E0223 10:11:17.137902 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968\": container with ID starting with ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968 not found: ID does not exist" containerID="ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.137940 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968"} err="failed to get container status \"ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968\": rpc error: code = NotFound desc = could not find container \"ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968\": container with ID starting with ee4a6e8c7a56b008d06625053546bc99eed1f964fb19b05ae1f4707cdba90968 not found: ID does not exist" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.260997 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.261231 4904 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.271796 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.271837 4904 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9faad24f-6aac-420b-8090-b2299d267fdb" Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.275040 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 10:11:17 crc kubenswrapper[4904]: I0223 10:11:17.275068 4904 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9faad24f-6aac-420b-8090-b2299d267fdb" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.873630 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxhz2"] Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.874885 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.874907 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.874919 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.874927 4904 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.874941 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d9be42-ba8c-426b-b4b0-bad0bc65648b" containerName="marketplace-operator" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.874952 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d9be42-ba8c-426b-b4b0-bad0bc65648b" containerName="marketplace-operator" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.874962 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="extract-utilities" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.874970 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="extract-utilities" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.874982 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" containerName="extract-utilities" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.874989 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" containerName="extract-utilities" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875004 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875016 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875028 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerName="extract-content" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875038 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerName="extract-content" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875053 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875063 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875075 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="extract-utilities" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875083 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="extract-utilities" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875093 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" containerName="extract-content" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875101 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" containerName="extract-content" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875111 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="extract-content" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875119 4904 
state_mem.go:107] "Deleted CPUSet assignment" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="extract-content" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875132 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" containerName="installer" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875139 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" containerName="installer" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875152 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerName="extract-utilities" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875162 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerName="extract-utilities" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875173 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="extract-content" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875181 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="extract-content" Feb 23 10:11:34 crc kubenswrapper[4904]: E0223 10:11:34.875194 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875201 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875332 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="46dff5e0-cb39-492e-95c6-33e67169ef87" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875350 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875363 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d9be42-ba8c-426b-b4b0-bad0bc65648b" containerName="marketplace-operator" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875373 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e82107db-3789-4ed9-8b7a-7dc968cb833f" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875383 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf67771-0a1e-4d5d-8003-5d46e639e02a" containerName="installer" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875392 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae8aff6-7d01-4352-b974-418342d434b3" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.875402 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d608de8-d727-4542-b6aa-4fedda2eaa3f" containerName="registry-server" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.876067 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.879478 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.879521 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.879565 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.879611 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.897243 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.904517 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxhz2"] Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.915106 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw56r\" (UniqueName: \"kubernetes.io/projected/523e6ad2-ad13-42ec-8352-6451ab42c338-kube-api-access-gw56r\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.915191 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/523e6ad2-ad13-42ec-8352-6451ab42c338-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:34 crc kubenswrapper[4904]: I0223 10:11:34.915280 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/523e6ad2-ad13-42ec-8352-6451ab42c338-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:35 crc kubenswrapper[4904]: I0223 10:11:35.017183 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw56r\" (UniqueName: \"kubernetes.io/projected/523e6ad2-ad13-42ec-8352-6451ab42c338-kube-api-access-gw56r\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:35 crc kubenswrapper[4904]: I0223 10:11:35.017250 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/523e6ad2-ad13-42ec-8352-6451ab42c338-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:35 crc kubenswrapper[4904]: I0223 10:11:35.017289 4904 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/523e6ad2-ad13-42ec-8352-6451ab42c338-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:35 crc kubenswrapper[4904]: I0223 10:11:35.018686 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/523e6ad2-ad13-42ec-8352-6451ab42c338-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:35 crc kubenswrapper[4904]: I0223 10:11:35.030974 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/523e6ad2-ad13-42ec-8352-6451ab42c338-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:35 crc kubenswrapper[4904]: I0223 10:11:35.040942 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw56r\" (UniqueName: \"kubernetes.io/projected/523e6ad2-ad13-42ec-8352-6451ab42c338-kube-api-access-gw56r\") pod \"marketplace-operator-79b997595-kxhz2\" (UID: \"523e6ad2-ad13-42ec-8352-6451ab42c338\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:35 crc kubenswrapper[4904]: I0223 10:11:35.192944 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:35 crc kubenswrapper[4904]: I0223 10:11:35.680703 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxhz2"] Feb 23 10:11:36 crc kubenswrapper[4904]: I0223 10:11:36.237328 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" event={"ID":"523e6ad2-ad13-42ec-8352-6451ab42c338","Type":"ContainerStarted","Data":"025de001faebace08b7f3924c13626c585b89eb70059f9113e83a0c554ab1fa8"} Feb 23 10:11:36 crc kubenswrapper[4904]: I0223 10:11:36.237378 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" event={"ID":"523e6ad2-ad13-42ec-8352-6451ab42c338","Type":"ContainerStarted","Data":"f1e31926574a9225b9dfb5f8000c8c0ff0291cdff6fbbf8e53ee31d45b706287"} Feb 23 10:11:36 crc kubenswrapper[4904]: I0223 10:11:36.237608 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:36 crc kubenswrapper[4904]: I0223 10:11:36.241936 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" Feb 23 10:11:36 crc kubenswrapper[4904]: I0223 10:11:36.255197 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kxhz2" podStartSLOduration=2.255163082 podStartE2EDuration="2.255163082s" podCreationTimestamp="2026-02-23 10:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:11:36.25509801 +0000 UTC m=+329.675471523" 
watchObservedRunningTime="2026-02-23 10:11:36.255163082 +0000 UTC m=+329.675536605" Feb 23 10:11:40 crc kubenswrapper[4904]: I0223 10:11:40.273011 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 23 10:11:40 crc kubenswrapper[4904]: I0223 10:11:40.275839 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 10:11:40 crc kubenswrapper[4904]: I0223 10:11:40.279004 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 10:11:40 crc kubenswrapper[4904]: I0223 10:11:40.279062 4904 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a9e46c1e7322d46e0702294683ea9325a55df40dee953b419647cc5df4800ad0" exitCode=137 Feb 23 10:11:40 crc kubenswrapper[4904]: I0223 10:11:40.279101 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a9e46c1e7322d46e0702294683ea9325a55df40dee953b419647cc5df4800ad0"} Feb 23 10:11:40 crc kubenswrapper[4904]: I0223 10:11:40.279134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb3c4e3e374014cb5bf47864200d2880b808c158cec3226a41326ac3fbef0a8b"} Feb 23 10:11:40 crc kubenswrapper[4904]: I0223 10:11:40.279158 4904 scope.go:117] "RemoveContainer" containerID="00034118ac46fa80f07637055d7140743737693c2fb6b0f4bc1924c40c19eb94" Feb 23 10:11:41 crc kubenswrapper[4904]: I0223 10:11:41.289698 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 23 10:11:41 crc kubenswrapper[4904]: I0223 10:11:41.290835 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 10:11:44 crc kubenswrapper[4904]: I0223 10:11:44.067076 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:11:49 crc kubenswrapper[4904]: I0223 10:11:49.628344 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:11:49 crc kubenswrapper[4904]: I0223 10:11:49.635614 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:11:50 crc kubenswrapper[4904]: I0223 10:11:50.341920 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 10:11:51 crc kubenswrapper[4904]: I0223 10:11:51.227973 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.616330 4904 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-mm4x2"] Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.618346 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.620631 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.628172 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm4x2"] Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.658413 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-utilities\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.658473 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbz5\" (UniqueName: \"kubernetes.io/projected/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-kube-api-access-vxbz5\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.658623 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-catalog-content\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.760197 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-catalog-content\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.760314 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-utilities\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.760355 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbz5\" (UniqueName: \"kubernetes.io/projected/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-kube-api-access-vxbz5\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.761036 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-utilities\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.761036 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-catalog-content\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.782654 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbz5\" (UniqueName: \"kubernetes.io/projected/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-kube-api-access-vxbz5\") pod \"certified-operators-mm4x2\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:02 crc kubenswrapper[4904]: I0223 10:12:02.942226 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.206567 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvnlh"] Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.207515 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.213859 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.223241 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvnlh"] Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.268189 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qppjd\" (UniqueName: \"kubernetes.io/projected/280a3854-63b9-459c-9d64-b18ecf250e5a-kube-api-access-qppjd\") pod \"community-operators-qvnlh\" (UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.268278 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/280a3854-63b9-459c-9d64-b18ecf250e5a-utilities\") pod \"community-operators-qvnlh\" (UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.268339 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/280a3854-63b9-459c-9d64-b18ecf250e5a-catalog-content\") pod \"community-operators-qvnlh\" (UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.370771 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qppjd\" (UniqueName: \"kubernetes.io/projected/280a3854-63b9-459c-9d64-b18ecf250e5a-kube-api-access-qppjd\") pod \"community-operators-qvnlh\" (UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.370882 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/280a3854-63b9-459c-9d64-b18ecf250e5a-utilities\") pod \"community-operators-qvnlh\" 
(UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.370946 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/280a3854-63b9-459c-9d64-b18ecf250e5a-catalog-content\") pod \"community-operators-qvnlh\" (UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.371701 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/280a3854-63b9-459c-9d64-b18ecf250e5a-utilities\") pod \"community-operators-qvnlh\" (UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.375221 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/280a3854-63b9-459c-9d64-b18ecf250e5a-catalog-content\") pod \"community-operators-qvnlh\" (UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.396218 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qppjd\" (UniqueName: \"kubernetes.io/projected/280a3854-63b9-459c-9d64-b18ecf250e5a-kube-api-access-qppjd\") pod \"community-operators-qvnlh\" (UID: \"280a3854-63b9-459c-9d64-b18ecf250e5a\") " pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.458597 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm4x2"] Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.527826 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.740503 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvnlh"] Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.852245 4904 generic.go:334] "Generic (PLEG): container finished" podID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerID="5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d" exitCode=0 Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.852348 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm4x2" event={"ID":"bb3d1d55-eafd-4635-847c-d649a2c2d3e8","Type":"ContainerDied","Data":"5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d"} Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.852410 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm4x2" event={"ID":"bb3d1d55-eafd-4635-847c-d649a2c2d3e8","Type":"ContainerStarted","Data":"2e9e56156f964ba8f462740e2705b32a545ab31b932090762ca9e9ed279f52d6"} Feb 23 10:12:03 crc kubenswrapper[4904]: I0223 10:12:03.854223 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvnlh" event={"ID":"280a3854-63b9-459c-9d64-b18ecf250e5a","Type":"ContainerStarted","Data":"73ef5cf2a6d3a607e1df450f96917d6242e0a3586aee7696b282d5b1ecd054ab"} Feb 23 10:12:04 crc kubenswrapper[4904]: I0223 10:12:04.864750 4904 generic.go:334] "Generic (PLEG): container finished" podID="280a3854-63b9-459c-9d64-b18ecf250e5a" containerID="78483d05b48cafddedda2d538c8dcffa3e481c7d856e697d2e1849b7e3b67674" exitCode=0 Feb 23 10:12:04 crc kubenswrapper[4904]: I0223 10:12:04.864833 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvnlh" event={"ID":"280a3854-63b9-459c-9d64-b18ecf250e5a","Type":"ContainerDied","Data":"78483d05b48cafddedda2d538c8dcffa3e481c7d856e697d2e1849b7e3b67674"} Feb 23 10:12:04 crc kubenswrapper[4904]: I0223 10:12:04.868514 4904 generic.go:334] "Generic (PLEG): container finished" podID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerID="868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb" exitCode=0 Feb 23 10:12:04 crc kubenswrapper[4904]: I0223 10:12:04.868564 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm4x2" event={"ID":"bb3d1d55-eafd-4635-847c-d649a2c2d3e8","Type":"ContainerDied","Data":"868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb"} Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.017900 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xnmt"] Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.038172 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.041480 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.063365 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xnmt"] Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.194729 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59757d7b-1b5f-4019-bae0-6605e1e7a870-catalog-content\") pod \"redhat-marketplace-6xnmt\" (UID: \"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.194812 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqkx6\" (UniqueName: \"kubernetes.io/projected/59757d7b-1b5f-4019-bae0-6605e1e7a870-kube-api-access-nqkx6\") pod \"redhat-marketplace-6xnmt\" (UID: \"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.194852 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59757d7b-1b5f-4019-bae0-6605e1e7a870-utilities\") pod \"redhat-marketplace-6xnmt\" (UID: \"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.296542 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59757d7b-1b5f-4019-bae0-6605e1e7a870-catalog-content\") pod \"redhat-marketplace-6xnmt\" (UID: \"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.296629 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqkx6\" (UniqueName: \"kubernetes.io/projected/59757d7b-1b5f-4019-bae0-6605e1e7a870-kube-api-access-nqkx6\") pod \"redhat-marketplace-6xnmt\" (UID: \"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.296673 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59757d7b-1b5f-4019-bae0-6605e1e7a870-utilities\") pod \"redhat-marketplace-6xnmt\" (UID: \"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.297457 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59757d7b-1b5f-4019-bae0-6605e1e7a870-catalog-content\") pod \"redhat-marketplace-6xnmt\" (UID: \"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.297482 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59757d7b-1b5f-4019-bae0-6605e1e7a870-utilities\") pod \"redhat-marketplace-6xnmt\" (UID: 
\"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.320214 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqkx6\" (UniqueName: \"kubernetes.io/projected/59757d7b-1b5f-4019-bae0-6605e1e7a870-kube-api-access-nqkx6\") pod \"redhat-marketplace-6xnmt\" (UID: \"59757d7b-1b5f-4019-bae0-6605e1e7a870\") " pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.367680 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.624814 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mjksh"] Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.626625 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.632590 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.648942 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mjksh"] Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.804767 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89jm6\" (UniqueName: \"kubernetes.io/projected/16988337-ed54-4223-87e6-7c7335b35e25-kube-api-access-89jm6\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.804844 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16988337-ed54-4223-87e6-7c7335b35e25-utilities\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.804881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16988337-ed54-4223-87e6-7c7335b35e25-catalog-content\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.858388 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xnmt"] Feb 23 10:12:05 crc kubenswrapper[4904]: W0223 10:12:05.868262 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59757d7b_1b5f_4019_bae0_6605e1e7a870.slice/crio-09e5f2fe99261b6e3cb7787f6e69fc8a24bc0ad5867ad5f6fe2b91a6599cfe59 WatchSource:0}: Error finding container 09e5f2fe99261b6e3cb7787f6e69fc8a24bc0ad5867ad5f6fe2b91a6599cfe59: Status 404 returned error can't find the container with id 09e5f2fe99261b6e3cb7787f6e69fc8a24bc0ad5867ad5f6fe2b91a6599cfe59 Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.876481 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvnlh" 
event={"ID":"280a3854-63b9-459c-9d64-b18ecf250e5a","Type":"ContainerStarted","Data":"fca88e18fd113c66695d9e291383e0ce093aab571d17cec4253de366c3715892"} Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.877501 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xnmt" event={"ID":"59757d7b-1b5f-4019-bae0-6605e1e7a870","Type":"ContainerStarted","Data":"09e5f2fe99261b6e3cb7787f6e69fc8a24bc0ad5867ad5f6fe2b91a6599cfe59"} Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.882140 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm4x2" event={"ID":"bb3d1d55-eafd-4635-847c-d649a2c2d3e8","Type":"ContainerStarted","Data":"bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34"} Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.906077 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89jm6\" (UniqueName: \"kubernetes.io/projected/16988337-ed54-4223-87e6-7c7335b35e25-kube-api-access-89jm6\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.906656 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16988337-ed54-4223-87e6-7c7335b35e25-utilities\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.906730 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16988337-ed54-4223-87e6-7c7335b35e25-catalog-content\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.907302 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16988337-ed54-4223-87e6-7c7335b35e25-utilities\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.907448 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16988337-ed54-4223-87e6-7c7335b35e25-catalog-content\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.918282 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mm4x2" podStartSLOduration=2.5308465079999998 podStartE2EDuration="3.918261798s" podCreationTimestamp="2026-02-23 10:12:02 +0000 UTC" firstStartedPulling="2026-02-23 10:12:03.853797613 +0000 UTC m=+357.274171126" lastFinishedPulling="2026-02-23 10:12:05.241212903 +0000 UTC m=+358.661586416" observedRunningTime="2026-02-23 10:12:05.917160826 +0000 UTC m=+359.337534339" watchObservedRunningTime="2026-02-23 10:12:05.918261798 +0000 UTC m=+359.338635311" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.929330 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89jm6\" (UniqueName: 
\"kubernetes.io/projected/16988337-ed54-4223-87e6-7c7335b35e25-kube-api-access-89jm6\") pod \"redhat-operators-mjksh\" (UID: \"16988337-ed54-4223-87e6-7c7335b35e25\") " pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:05 crc kubenswrapper[4904]: I0223 10:12:05.967683 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:06 crc kubenswrapper[4904]: I0223 10:12:06.388420 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mjksh"] Feb 23 10:12:06 crc kubenswrapper[4904]: I0223 10:12:06.890695 4904 generic.go:334] "Generic (PLEG): container finished" podID="59757d7b-1b5f-4019-bae0-6605e1e7a870" containerID="ddcfe061c308305d71ec6a21cf5eff5293cde63a9d733a09738f024f6dfdd576" exitCode=0 Feb 23 10:12:06 crc kubenswrapper[4904]: I0223 10:12:06.890758 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xnmt" event={"ID":"59757d7b-1b5f-4019-bae0-6605e1e7a870","Type":"ContainerDied","Data":"ddcfe061c308305d71ec6a21cf5eff5293cde63a9d733a09738f024f6dfdd576"} Feb 23 10:12:06 crc kubenswrapper[4904]: I0223 10:12:06.893298 4904 generic.go:334] "Generic (PLEG): container finished" podID="16988337-ed54-4223-87e6-7c7335b35e25" containerID="1ea3de11747ea4cfafc9e1b45ff77a67b1d3ce044f1d4d4c1066ab299d3218ed" exitCode=0 Feb 23 10:12:06 crc kubenswrapper[4904]: I0223 10:12:06.893376 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjksh" event={"ID":"16988337-ed54-4223-87e6-7c7335b35e25","Type":"ContainerDied","Data":"1ea3de11747ea4cfafc9e1b45ff77a67b1d3ce044f1d4d4c1066ab299d3218ed"} Feb 23 10:12:06 crc kubenswrapper[4904]: I0223 10:12:06.893418 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjksh" event={"ID":"16988337-ed54-4223-87e6-7c7335b35e25","Type":"ContainerStarted","Data":"ec098bea6b0b7cfce8b6b67317ba7b2ff231c2cc89ceb93428f2a43cf26291eb"} Feb 23 10:12:06 crc kubenswrapper[4904]: I0223 10:12:06.895467 4904 generic.go:334] "Generic (PLEG): container finished" podID="280a3854-63b9-459c-9d64-b18ecf250e5a" containerID="fca88e18fd113c66695d9e291383e0ce093aab571d17cec4253de366c3715892" exitCode=0 Feb 23 10:12:06 crc kubenswrapper[4904]: I0223 10:12:06.895569 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvnlh" event={"ID":"280a3854-63b9-459c-9d64-b18ecf250e5a","Type":"ContainerDied","Data":"fca88e18fd113c66695d9e291383e0ce093aab571d17cec4253de366c3715892"} Feb 23 10:12:07 crc kubenswrapper[4904]: I0223 10:12:07.905429 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvnlh" event={"ID":"280a3854-63b9-459c-9d64-b18ecf250e5a","Type":"ContainerStarted","Data":"25ea0d2cb6302b623929d8dd2b85f83b0ce0f48c20eba48b0ad89f9e12cf3d98"} Feb 23 10:12:07 crc kubenswrapper[4904]: I0223 10:12:07.911903 4904 generic.go:334] "Generic (PLEG): container finished" podID="59757d7b-1b5f-4019-bae0-6605e1e7a870" containerID="0c6b01cfd3fc7b64eed6207d456bda98ac622dbc3420610244e2af6fc6cfb505" exitCode=0 Feb 23 10:12:07 crc kubenswrapper[4904]: I0223 10:12:07.912004 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xnmt" event={"ID":"59757d7b-1b5f-4019-bae0-6605e1e7a870","Type":"ContainerDied","Data":"0c6b01cfd3fc7b64eed6207d456bda98ac622dbc3420610244e2af6fc6cfb505"} Feb 23 10:12:07 crc 
kubenswrapper[4904]: I0223 10:12:07.941599 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qvnlh" podStartSLOduration=2.482957071 podStartE2EDuration="4.941574685s" podCreationTimestamp="2026-02-23 10:12:03 +0000 UTC" firstStartedPulling="2026-02-23 10:12:04.866232478 +0000 UTC m=+358.286605991" lastFinishedPulling="2026-02-23 10:12:07.324850072 +0000 UTC m=+360.745223605" observedRunningTime="2026-02-23 10:12:07.934332786 +0000 UTC m=+361.354706309" watchObservedRunningTime="2026-02-23 10:12:07.941574685 +0000 UTC m=+361.361948198" Feb 23 10:12:08 crc kubenswrapper[4904]: I0223 10:12:08.921619 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xnmt" event={"ID":"59757d7b-1b5f-4019-bae0-6605e1e7a870","Type":"ContainerStarted","Data":"a529399479aa40fbb06491c7560d654a838a5fa1b874c52acaf53f4af72df413"} Feb 23 10:12:08 crc kubenswrapper[4904]: I0223 10:12:08.925023 4904 generic.go:334] "Generic (PLEG): container finished" podID="16988337-ed54-4223-87e6-7c7335b35e25" containerID="fa40f3f41d371bbb992852cb632c05542fdb1a1983ce8f855ef712ad29c24d96" exitCode=0 Feb 23 10:12:08 crc kubenswrapper[4904]: I0223 10:12:08.925088 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjksh" event={"ID":"16988337-ed54-4223-87e6-7c7335b35e25","Type":"ContainerDied","Data":"fa40f3f41d371bbb992852cb632c05542fdb1a1983ce8f855ef712ad29c24d96"} Feb 23 10:12:08 crc kubenswrapper[4904]: I0223 10:12:08.948258 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xnmt" podStartSLOduration=3.528228023 podStartE2EDuration="4.948226973s" podCreationTimestamp="2026-02-23 10:12:04 +0000 UTC" firstStartedPulling="2026-02-23 10:12:06.892554383 +0000 UTC m=+360.312927916" lastFinishedPulling="2026-02-23 10:12:08.312553343 +0000 UTC m=+361.732926866" observedRunningTime="2026-02-23 10:12:08.946839163 +0000 UTC m=+362.367212676" watchObservedRunningTime="2026-02-23 10:12:08.948226973 +0000 UTC m=+362.368600506" Feb 23 10:12:09 crc kubenswrapper[4904]: I0223 10:12:09.932879 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjksh" event={"ID":"16988337-ed54-4223-87e6-7c7335b35e25","Type":"ContainerStarted","Data":"b8c18f0f785319161a22d6fe95e9e5444692110e34570b90e29b1815af69c268"} Feb 23 10:12:09 crc kubenswrapper[4904]: I0223 10:12:09.956393 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mjksh" podStartSLOduration=2.439980305 podStartE2EDuration="4.956371816s" podCreationTimestamp="2026-02-23 10:12:05 +0000 UTC" firstStartedPulling="2026-02-23 10:12:06.895727654 +0000 UTC m=+360.316101167" lastFinishedPulling="2026-02-23 10:12:09.412119165 +0000 UTC m=+362.832492678" observedRunningTime="2026-02-23 10:12:09.953771161 +0000 UTC m=+363.374144674" watchObservedRunningTime="2026-02-23 10:12:09.956371816 +0000 UTC m=+363.376745349" Feb 23 10:12:12 crc kubenswrapper[4904]: I0223 10:12:12.943281 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:12 crc kubenswrapper[4904]: I0223 10:12:12.943357 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:13 crc kubenswrapper[4904]: I0223 10:12:13.004838 4904 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:13 crc kubenswrapper[4904]: I0223 10:12:13.528995 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:13 crc kubenswrapper[4904]: I0223 10:12:13.529313 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:13 crc kubenswrapper[4904]: I0223 10:12:13.576835 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:14 crc kubenswrapper[4904]: I0223 10:12:14.011235 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:12:14 crc kubenswrapper[4904]: I0223 10:12:14.024340 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qvnlh" Feb 23 10:12:15 crc kubenswrapper[4904]: I0223 10:12:15.368658 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:15 crc kubenswrapper[4904]: I0223 10:12:15.369311 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:15 crc kubenswrapper[4904]: I0223 10:12:15.433679 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:15 crc kubenswrapper[4904]: I0223 10:12:15.968692 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:15 crc kubenswrapper[4904]: I0223 10:12:15.968818 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:16 crc kubenswrapper[4904]: I0223 10:12:16.021335 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:16 crc kubenswrapper[4904]: I0223 10:12:16.027971 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xnmt" Feb 23 10:12:17 crc kubenswrapper[4904]: I0223 10:12:17.029502 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mjksh" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.491028 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dgcrs"] Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.492240 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.509149 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dgcrs"] Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.589915 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac89e467-6f5d-4184-91b5-13fb8095ed47-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.590179 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9l89\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-kube-api-access-q9l89\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.590271 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac89e467-6f5d-4184-91b5-13fb8095ed47-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.590345 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-bound-sa-token\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.590421 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac89e467-6f5d-4184-91b5-13fb8095ed47-registry-certificates\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.590494 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-registry-tls\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.590631 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac89e467-6f5d-4184-91b5-13fb8095ed47-trusted-ca\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.590780 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.621980 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.692332 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac89e467-6f5d-4184-91b5-13fb8095ed47-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.692408 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9l89\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-kube-api-access-q9l89\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.692461 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac89e467-6f5d-4184-91b5-13fb8095ed47-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.692479 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-bound-sa-token\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.692522 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac89e467-6f5d-4184-91b5-13fb8095ed47-registry-certificates\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.692547 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-registry-tls\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.692591 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac89e467-6f5d-4184-91b5-13fb8095ed47-trusted-ca\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.693179 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac89e467-6f5d-4184-91b5-13fb8095ed47-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.693985 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac89e467-6f5d-4184-91b5-13fb8095ed47-trusted-ca\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.694263 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac89e467-6f5d-4184-91b5-13fb8095ed47-registry-certificates\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.699454 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-registry-tls\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.699976 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac89e467-6f5d-4184-91b5-13fb8095ed47-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.712216 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9l89\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-kube-api-access-q9l89\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.714280 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac89e467-6f5d-4184-91b5-13fb8095ed47-bound-sa-token\") pod \"image-registry-66df7c8f76-dgcrs\" (UID: \"ac89e467-6f5d-4184-91b5-13fb8095ed47\") " pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:40 crc kubenswrapper[4904]: I0223 10:12:40.819468 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:41 crc kubenswrapper[4904]: I0223 10:12:41.089765 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dgcrs"] Feb 23 10:12:41 crc kubenswrapper[4904]: I0223 10:12:41.127803 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" event={"ID":"ac89e467-6f5d-4184-91b5-13fb8095ed47","Type":"ContainerStarted","Data":"59f0318c9e149b6e7e7a04fff0e2c24167d49d2fde6136e760c54cb92ab7dfd8"} Feb 23 10:12:42 crc kubenswrapper[4904]: I0223 10:12:42.134743 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" event={"ID":"ac89e467-6f5d-4184-91b5-13fb8095ed47","Type":"ContainerStarted","Data":"b0f6e57d457b82658d3f9299140df2f43ec4eba5663c714b5f36b7b51bc6ff1b"} Feb 23 10:12:42 crc kubenswrapper[4904]: I0223 10:12:42.135157 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:12:42 crc kubenswrapper[4904]: I0223 10:12:42.160372 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" podStartSLOduration=2.160341951 podStartE2EDuration="2.160341951s" podCreationTimestamp="2026-02-23 10:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:12:42.157940713 +0000 UTC m=+395.578314246" watchObservedRunningTime="2026-02-23 10:12:42.160341951 +0000 UTC m=+395.580715464" Feb 23 10:12:47 crc kubenswrapper[4904]: I0223 10:12:47.398148 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:12:47 crc kubenswrapper[4904]: I0223 10:12:47.399353 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:13:00 crc kubenswrapper[4904]: I0223 10:13:00.828023 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dgcrs" Feb 23 10:13:00 crc kubenswrapper[4904]: I0223 10:13:00.892605 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkgqc"] Feb 23 10:13:17 crc kubenswrapper[4904]: I0223 10:13:17.398911 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:13:17 crc kubenswrapper[4904]: I0223 10:13:17.400131 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 23 10:13:25 crc kubenswrapper[4904]: I0223 10:13:25.974483 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" podUID="0fe2282c-11c4-4545-9301-f417bbe9dee7" containerName="registry" containerID="cri-o://c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e" gracePeriod=30 Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.340180 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.383820 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fe2282c-11c4-4545-9301-f417bbe9dee7-installation-pull-secrets\") pod \"0fe2282c-11c4-4545-9301-f417bbe9dee7\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.383906 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fe2282c-11c4-4545-9301-f417bbe9dee7-ca-trust-extracted\") pod \"0fe2282c-11c4-4545-9301-f417bbe9dee7\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.383955 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-certificates\") pod \"0fe2282c-11c4-4545-9301-f417bbe9dee7\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.384280 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0fe2282c-11c4-4545-9301-f417bbe9dee7\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.384332 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-trusted-ca\") pod \"0fe2282c-11c4-4545-9301-f417bbe9dee7\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.384387 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-tls\") pod \"0fe2282c-11c4-4545-9301-f417bbe9dee7\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.384436 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k22j9\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-kube-api-access-k22j9\") pod \"0fe2282c-11c4-4545-9301-f417bbe9dee7\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.384468 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-bound-sa-token\") pod \"0fe2282c-11c4-4545-9301-f417bbe9dee7\" (UID: \"0fe2282c-11c4-4545-9301-f417bbe9dee7\") " Feb 23 10:13:26 crc 
kubenswrapper[4904]: I0223 10:13:26.386635 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0fe2282c-11c4-4545-9301-f417bbe9dee7" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.387865 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0fe2282c-11c4-4545-9301-f417bbe9dee7" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.393510 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0fe2282c-11c4-4545-9301-f417bbe9dee7" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.394677 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0fe2282c-11c4-4545-9301-f417bbe9dee7" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.394716 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe2282c-11c4-4545-9301-f417bbe9dee7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0fe2282c-11c4-4545-9301-f417bbe9dee7" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.397393 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-kube-api-access-k22j9" (OuterVolumeSpecName: "kube-api-access-k22j9") pod "0fe2282c-11c4-4545-9301-f417bbe9dee7" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7"). InnerVolumeSpecName "kube-api-access-k22j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.397864 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0fe2282c-11c4-4545-9301-f417bbe9dee7" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.409496 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fe2282c-11c4-4545-9301-f417bbe9dee7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0fe2282c-11c4-4545-9301-f417bbe9dee7" (UID: "0fe2282c-11c4-4545-9301-f417bbe9dee7"). 
InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.447378 4904 generic.go:334] "Generic (PLEG): container finished" podID="0fe2282c-11c4-4545-9301-f417bbe9dee7" containerID="c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e" exitCode=0 Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.447563 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.447898 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" event={"ID":"0fe2282c-11c4-4545-9301-f417bbe9dee7","Type":"ContainerDied","Data":"c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e"} Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.448073 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fkgqc" event={"ID":"0fe2282c-11c4-4545-9301-f417bbe9dee7","Type":"ContainerDied","Data":"03ec02ab76cf8c3f527af3f65ab5cf020e1f17b973143e357d247359e4dfa5bd"} Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.448123 4904 scope.go:117] "RemoveContainer" containerID="c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.474190 4904 scope.go:117] "RemoveContainer" containerID="c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e" Feb 23 10:13:26 crc kubenswrapper[4904]: E0223 10:13:26.475036 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e\": container with ID starting with c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e not found: ID does not exist" containerID="c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.475098 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e"} err="failed to get container status \"c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e\": rpc error: code = NotFound desc = could not find container \"c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e\": container with ID starting with c9af1dc4d88203a22bfda3f239a680effd2cd7ea2412fbf5f6a8a705b7b0a76e not found: ID does not exist" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.488762 4904 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fe2282c-11c4-4545-9301-f417bbe9dee7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.488834 4904 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fe2282c-11c4-4545-9301-f417bbe9dee7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.488880 4904 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.488897 4904 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fe2282c-11c4-4545-9301-f417bbe9dee7-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.488912 4904 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.488928 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k22j9\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-kube-api-access-k22j9\") on node \"crc\" DevicePath \"\"" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.488970 4904 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fe2282c-11c4-4545-9301-f417bbe9dee7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.490819 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkgqc"] Feb 23 10:13:26 crc kubenswrapper[4904]: I0223 10:13:26.503468 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkgqc"] Feb 23 10:13:27 crc kubenswrapper[4904]: I0223 10:13:27.265993 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe2282c-11c4-4545-9301-f417bbe9dee7" path="/var/lib/kubelet/pods/0fe2282c-11c4-4545-9301-f417bbe9dee7/volumes" Feb 23 10:13:47 crc kubenswrapper[4904]: I0223 10:13:47.398125 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:13:47 crc kubenswrapper[4904]: I0223 10:13:47.399062 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:13:47 crc kubenswrapper[4904]: I0223 10:13:47.399150 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:13:47 crc kubenswrapper[4904]: I0223 10:13:47.400208 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae646e613a51e7f064fed7847be41ac44a2747d1308e11ccf82810a5d1a00115"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:13:47 crc kubenswrapper[4904]: I0223 10:13:47.400307 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://ae646e613a51e7f064fed7847be41ac44a2747d1308e11ccf82810a5d1a00115" gracePeriod=600 Feb 23 10:13:47 crc kubenswrapper[4904]: I0223 10:13:47.599688 4904 generic.go:334] "Generic (PLEG): container finished" 
podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="ae646e613a51e7f064fed7847be41ac44a2747d1308e11ccf82810a5d1a00115" exitCode=0 Feb 23 10:13:47 crc kubenswrapper[4904]: I0223 10:13:47.599778 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"ae646e613a51e7f064fed7847be41ac44a2747d1308e11ccf82810a5d1a00115"} Feb 23 10:13:47 crc kubenswrapper[4904]: I0223 10:13:47.599888 4904 scope.go:117] "RemoveContainer" containerID="6a16209ea55d7731b86b4c43ff27926e7432ffffeb4699ddefd5a0975754dcde" Feb 23 10:13:48 crc kubenswrapper[4904]: I0223 10:13:48.614548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"5c2bc8b3e78a2b6bca2525dc93766b010390fb3eb8142d793d1bb25245ce12c0"} Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.181107 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4"] Feb 23 10:15:00 crc kubenswrapper[4904]: E0223 10:15:00.182112 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe2282c-11c4-4545-9301-f417bbe9dee7" containerName="registry" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.182129 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe2282c-11c4-4545-9301-f417bbe9dee7" containerName="registry" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.182244 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe2282c-11c4-4545-9301-f417bbe9dee7" containerName="registry" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.182768 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.185229 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.188143 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.294004 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4"] Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.311574 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkhn\" (UniqueName: \"kubernetes.io/projected/19008559-f0c8-40ea-9898-0fcf1c21ef3c-kube-api-access-zvkhn\") pod \"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.311730 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19008559-f0c8-40ea-9898-0fcf1c21ef3c-secret-volume\") pod \"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.311754 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19008559-f0c8-40ea-9898-0fcf1c21ef3c-config-volume\") pod \"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.413109 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvkhn\" (UniqueName: \"kubernetes.io/projected/19008559-f0c8-40ea-9898-0fcf1c21ef3c-kube-api-access-zvkhn\") pod \"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.413262 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19008559-f0c8-40ea-9898-0fcf1c21ef3c-secret-volume\") pod \"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.413297 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19008559-f0c8-40ea-9898-0fcf1c21ef3c-config-volume\") pod \"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.414627 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19008559-f0c8-40ea-9898-0fcf1c21ef3c-config-volume\") pod 
\"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.421503 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19008559-f0c8-40ea-9898-0fcf1c21ef3c-secret-volume\") pod \"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.444288 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvkhn\" (UniqueName: \"kubernetes.io/projected/19008559-f0c8-40ea-9898-0fcf1c21ef3c-kube-api-access-zvkhn\") pod \"collect-profiles-29530695-x77z4\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.505686 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:00 crc kubenswrapper[4904]: I0223 10:15:00.775891 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4"] Feb 23 10:15:01 crc kubenswrapper[4904]: I0223 10:15:01.135742 4904 generic.go:334] "Generic (PLEG): container finished" podID="19008559-f0c8-40ea-9898-0fcf1c21ef3c" containerID="46c86d7a497073f2c1577615e692da5a55e76e49fa839aee67ff31875e90c7ad" exitCode=0 Feb 23 10:15:01 crc kubenswrapper[4904]: I0223 10:15:01.135821 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" event={"ID":"19008559-f0c8-40ea-9898-0fcf1c21ef3c","Type":"ContainerDied","Data":"46c86d7a497073f2c1577615e692da5a55e76e49fa839aee67ff31875e90c7ad"} Feb 23 10:15:01 crc kubenswrapper[4904]: I0223 10:15:01.135884 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" event={"ID":"19008559-f0c8-40ea-9898-0fcf1c21ef3c","Type":"ContainerStarted","Data":"b07f13d91f68a158f5e13a474461a81e937298c8ac205acb71e6eeac25c57c6d"} Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.433396 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.550157 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvkhn\" (UniqueName: \"kubernetes.io/projected/19008559-f0c8-40ea-9898-0fcf1c21ef3c-kube-api-access-zvkhn\") pod \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.550439 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19008559-f0c8-40ea-9898-0fcf1c21ef3c-config-volume\") pod \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.550496 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19008559-f0c8-40ea-9898-0fcf1c21ef3c-secret-volume\") pod \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\" (UID: \"19008559-f0c8-40ea-9898-0fcf1c21ef3c\") " Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.553294 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19008559-f0c8-40ea-9898-0fcf1c21ef3c-config-volume" (OuterVolumeSpecName: "config-volume") pod "19008559-f0c8-40ea-9898-0fcf1c21ef3c" (UID: "19008559-f0c8-40ea-9898-0fcf1c21ef3c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.557880 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19008559-f0c8-40ea-9898-0fcf1c21ef3c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19008559-f0c8-40ea-9898-0fcf1c21ef3c" (UID: "19008559-f0c8-40ea-9898-0fcf1c21ef3c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.558168 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19008559-f0c8-40ea-9898-0fcf1c21ef3c-kube-api-access-zvkhn" (OuterVolumeSpecName: "kube-api-access-zvkhn") pod "19008559-f0c8-40ea-9898-0fcf1c21ef3c" (UID: "19008559-f0c8-40ea-9898-0fcf1c21ef3c"). InnerVolumeSpecName "kube-api-access-zvkhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.652936 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvkhn\" (UniqueName: \"kubernetes.io/projected/19008559-f0c8-40ea-9898-0fcf1c21ef3c-kube-api-access-zvkhn\") on node \"crc\" DevicePath \"\"" Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.652985 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19008559-f0c8-40ea-9898-0fcf1c21ef3c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:15:02 crc kubenswrapper[4904]: I0223 10:15:02.653000 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19008559-f0c8-40ea-9898-0fcf1c21ef3c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:15:03 crc kubenswrapper[4904]: I0223 10:15:03.150939 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" event={"ID":"19008559-f0c8-40ea-9898-0fcf1c21ef3c","Type":"ContainerDied","Data":"b07f13d91f68a158f5e13a474461a81e937298c8ac205acb71e6eeac25c57c6d"} Feb 23 10:15:03 crc kubenswrapper[4904]: I0223 10:15:03.150989 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b07f13d91f68a158f5e13a474461a81e937298c8ac205acb71e6eeac25c57c6d" Feb 23 10:15:03 crc kubenswrapper[4904]: I0223 10:15:03.151066 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4" Feb 23 10:15:47 crc kubenswrapper[4904]: I0223 10:15:47.398671 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:15:47 crc kubenswrapper[4904]: I0223 10:15:47.399307 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.291699 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm"] Feb 23 10:16:12 crc kubenswrapper[4904]: E0223 10:16:12.292493 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19008559-f0c8-40ea-9898-0fcf1c21ef3c" containerName="collect-profiles" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.292508 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="19008559-f0c8-40ea-9898-0fcf1c21ef3c" containerName="collect-profiles" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.292622 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="19008559-f0c8-40ea-9898-0fcf1c21ef3c" containerName="collect-profiles" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.293086 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.295881 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.296962 4904 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xvs95" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.297856 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.300962 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-kmtx9"] Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.301735 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kmtx9" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.304784 4904 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-29bh7" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.306309 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm"] Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.327901 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kmtx9"] Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.337669 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dr7zn"] Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.338963 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.343826 4904 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-shjnm" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.351272 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dr7zn"] Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.367292 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdjq\" (UniqueName: \"kubernetes.io/projected/5f2bc9ed-640c-4d77-b4c1-996b8adc337f-kube-api-access-jwdjq\") pod \"cert-manager-858654f9db-kmtx9\" (UID: \"5f2bc9ed-640c-4d77-b4c1-996b8adc337f\") " pod="cert-manager/cert-manager-858654f9db-kmtx9" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.367406 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tggkc\" (UniqueName: \"kubernetes.io/projected/048cc369-9691-4a4c-9140-8ab1aa5e1cca-kube-api-access-tggkc\") pod \"cert-manager-webhook-687f57d79b-dr7zn\" (UID: \"048cc369-9691-4a4c-9140-8ab1aa5e1cca\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.367461 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4r9\" (UniqueName: \"kubernetes.io/projected/984f83b9-1f07-4198-af7f-c93cdb296e75-kube-api-access-cn4r9\") pod \"cert-manager-cainjector-cf98fcc89-7f5jm\" (UID: \"984f83b9-1f07-4198-af7f-c93cdb296e75\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm" Feb 23 10:16:12 crc 
kubenswrapper[4904]: I0223 10:16:12.468830 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwdjq\" (UniqueName: \"kubernetes.io/projected/5f2bc9ed-640c-4d77-b4c1-996b8adc337f-kube-api-access-jwdjq\") pod \"cert-manager-858654f9db-kmtx9\" (UID: \"5f2bc9ed-640c-4d77-b4c1-996b8adc337f\") " pod="cert-manager/cert-manager-858654f9db-kmtx9" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.469164 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tggkc\" (UniqueName: \"kubernetes.io/projected/048cc369-9691-4a4c-9140-8ab1aa5e1cca-kube-api-access-tggkc\") pod \"cert-manager-webhook-687f57d79b-dr7zn\" (UID: \"048cc369-9691-4a4c-9140-8ab1aa5e1cca\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.469303 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn4r9\" (UniqueName: \"kubernetes.io/projected/984f83b9-1f07-4198-af7f-c93cdb296e75-kube-api-access-cn4r9\") pod \"cert-manager-cainjector-cf98fcc89-7f5jm\" (UID: \"984f83b9-1f07-4198-af7f-c93cdb296e75\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.488915 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn4r9\" (UniqueName: \"kubernetes.io/projected/984f83b9-1f07-4198-af7f-c93cdb296e75-kube-api-access-cn4r9\") pod \"cert-manager-cainjector-cf98fcc89-7f5jm\" (UID: \"984f83b9-1f07-4198-af7f-c93cdb296e75\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.490359 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwdjq\" (UniqueName: \"kubernetes.io/projected/5f2bc9ed-640c-4d77-b4c1-996b8adc337f-kube-api-access-jwdjq\") pod \"cert-manager-858654f9db-kmtx9\" (UID: \"5f2bc9ed-640c-4d77-b4c1-996b8adc337f\") " pod="cert-manager/cert-manager-858654f9db-kmtx9" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.491317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tggkc\" (UniqueName: \"kubernetes.io/projected/048cc369-9691-4a4c-9140-8ab1aa5e1cca-kube-api-access-tggkc\") pod \"cert-manager-webhook-687f57d79b-dr7zn\" (UID: \"048cc369-9691-4a4c-9140-8ab1aa5e1cca\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.608344 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.616752 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kmtx9" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.654458 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.822551 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kmtx9"] Feb 23 10:16:12 crc kubenswrapper[4904]: I0223 10:16:12.832630 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:16:13 crc kubenswrapper[4904]: I0223 10:16:13.087368 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm"] Feb 23 10:16:13 crc kubenswrapper[4904]: W0223 10:16:13.091120 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod984f83b9_1f07_4198_af7f_c93cdb296e75.slice/crio-3d0da6e96c35fd1198170586588e11dee6c3d14d446301cc7fb2faf64a106df3 WatchSource:0}: Error finding container 3d0da6e96c35fd1198170586588e11dee6c3d14d446301cc7fb2faf64a106df3: Status 404 returned error can't find the container with id 3d0da6e96c35fd1198170586588e11dee6c3d14d446301cc7fb2faf64a106df3 Feb 23 10:16:13 crc kubenswrapper[4904]: I0223 10:16:13.095156 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dr7zn"] Feb 23 10:16:13 crc kubenswrapper[4904]: I0223 10:16:13.193510 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm" event={"ID":"984f83b9-1f07-4198-af7f-c93cdb296e75","Type":"ContainerStarted","Data":"3d0da6e96c35fd1198170586588e11dee6c3d14d446301cc7fb2faf64a106df3"} Feb 23 10:16:13 crc kubenswrapper[4904]: I0223 10:16:13.195025 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" event={"ID":"048cc369-9691-4a4c-9140-8ab1aa5e1cca","Type":"ContainerStarted","Data":"f309fe64feb76615901635b94e97760ee9a7ef123ab6d84ceed8a49e8936653a"} Feb 23 10:16:13 crc kubenswrapper[4904]: I0223 10:16:13.196210 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kmtx9" event={"ID":"5f2bc9ed-640c-4d77-b4c1-996b8adc337f","Type":"ContainerStarted","Data":"bda93c187659f517406e3889f8d5a7cb3522c37f08f928c79b3c81e764964602"} Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.218944 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" event={"ID":"048cc369-9691-4a4c-9140-8ab1aa5e1cca","Type":"ContainerStarted","Data":"61c1c38fc53f86d3571e4b44e0d643f711fe7da6b696b42e031f4552f98176b0"} Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.219476 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.220956 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kmtx9" event={"ID":"5f2bc9ed-640c-4d77-b4c1-996b8adc337f","Type":"ContainerStarted","Data":"0bd960a852aa916a04aa5856fc042aa101519025355788a43e90f762162dced5"} Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.223316 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm" event={"ID":"984f83b9-1f07-4198-af7f-c93cdb296e75","Type":"ContainerStarted","Data":"152aae4680cefb667f2558a2255c58833b02e4e865ea5b3e2e365ba8e7ba3f66"} Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.240226 4904 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" podStartSLOduration=1.663574316 podStartE2EDuration="5.240211448s" podCreationTimestamp="2026-02-23 10:16:12 +0000 UTC" firstStartedPulling="2026-02-23 10:16:13.107841385 +0000 UTC m=+606.528214898" lastFinishedPulling="2026-02-23 10:16:16.684478517 +0000 UTC m=+610.104852030" observedRunningTime="2026-02-23 10:16:17.236488343 +0000 UTC m=+610.656861876" watchObservedRunningTime="2026-02-23 10:16:17.240211448 +0000 UTC m=+610.660584961" Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.284815 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-kmtx9" podStartSLOduration=1.516359122 podStartE2EDuration="5.284790862s" podCreationTimestamp="2026-02-23 10:16:12 +0000 UTC" firstStartedPulling="2026-02-23 10:16:12.832270519 +0000 UTC m=+606.252644042" lastFinishedPulling="2026-02-23 10:16:16.600702269 +0000 UTC m=+610.021075782" observedRunningTime="2026-02-23 10:16:17.25663366 +0000 UTC m=+610.677007173" watchObservedRunningTime="2026-02-23 10:16:17.284790862 +0000 UTC m=+610.705164395" Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.287378 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7f5jm" podStartSLOduration=3.132437716 podStartE2EDuration="5.287346234s" podCreationTimestamp="2026-02-23 10:16:12 +0000 UTC" firstStartedPulling="2026-02-23 10:16:13.093380908 +0000 UTC m=+606.513754421" lastFinishedPulling="2026-02-23 10:16:15.248289426 +0000 UTC m=+608.668662939" observedRunningTime="2026-02-23 10:16:17.282493288 +0000 UTC m=+610.702866801" watchObservedRunningTime="2026-02-23 10:16:17.287346234 +0000 UTC m=+610.707719787" Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.398405 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:16:17 crc kubenswrapper[4904]: I0223 10:16:17.398741 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.367575 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9h7jb"] Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.368454 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovn-controller" containerID="cri-o://ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d" gracePeriod=30 Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.368518 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="nbdb" containerID="cri-o://9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281" gracePeriod=30 Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.368625 4904 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovn-acl-logging" containerID="cri-o://2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d" gracePeriod=30 Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.368690 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="northd" containerID="cri-o://71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63" gracePeriod=30 Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.368787 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="sbdb" containerID="cri-o://6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d" gracePeriod=30 Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.368598 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b" gracePeriod=30 Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.368908 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kube-rbac-proxy-node" containerID="cri-o://fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3" gracePeriod=30 Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.417414 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" containerID="cri-o://2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" gracePeriod=30 Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.656921 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dr7zn" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.658054 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/2.log" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.660053 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovn-acl-logging/0.log" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.660612 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovn-controller/0.log" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.661055 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.704894 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-netd\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.704940 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-script-lib\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.704961 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-etc-openvswitch\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.704982 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-netns\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705004 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-log-socket\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705019 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-env-overrides\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705045 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-node-log\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705058 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-bin\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705073 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-systemd-units\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705102 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-var-lib-openvswitch\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" 
(UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705105 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705121 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-slash\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705145 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-slash" (OuterVolumeSpecName: "host-slash") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705169 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-openvswitch\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705192 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-kubelet\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705221 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trdnv\" (UniqueName: \"kubernetes.io/projected/0acf61bd-42c5-4566-ac29-815afead2012-kube-api-access-trdnv\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705246 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0acf61bd-42c5-4566-ac29-815afead2012-ovn-node-metrics-cert\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705268 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-config\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705301 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-ovn\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705325 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705349 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-systemd\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705365 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-ovn-kubernetes\") pod \"0acf61bd-42c5-4566-ac29-815afead2012\" (UID: \"0acf61bd-42c5-4566-ac29-815afead2012\") " Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705505 4904 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705516 4904 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-slash\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705620 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705652 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705672 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705690 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-log-socket" (OuterVolumeSpecName: "log-socket") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705819 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-node-log" (OuterVolumeSpecName: "node-log") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705887 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705912 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705935 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705957 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.705983 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.706005 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.706024 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.706233 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.706292 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.712035 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acf61bd-42c5-4566-ac29-815afead2012-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.712917 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0acf61bd-42c5-4566-ac29-815afead2012-kube-api-access-trdnv" (OuterVolumeSpecName: "kube-api-access-trdnv") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "kube-api-access-trdnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.736265 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0acf61bd-42c5-4566-ac29-815afead2012" (UID: "0acf61bd-42c5-4566-ac29-815afead2012"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.738529 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d7rgp"] Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.738871 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="sbdb" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.738903 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="sbdb" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.738919 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kubecfg-setup" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.738930 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kubecfg-setup" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.738938 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovn-acl-logging" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.738946 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovn-acl-logging" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.738957 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.738965 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.738976 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="northd" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.738984 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="northd" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.738998 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="nbdb" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739006 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="nbdb" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.739021 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovn-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739028 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovn-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.739037 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739046 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.739059 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kube-rbac-proxy-node" 
Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739066 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kube-rbac-proxy-node" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.739077 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739087 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.739100 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739108 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739207 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="sbdb" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739222 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739233 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovn-acl-logging" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739241 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kube-rbac-proxy-node" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739251 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="nbdb" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739261 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739267 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739275 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="northd" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739281 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovn-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739288 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: E0223 10:16:22.739392 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739400 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.739492 4904 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0acf61bd-42c5-4566-ac29-815afead2012" containerName="ovnkube-controller" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.742530 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.806957 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-kubelet\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807078 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807105 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4010244d-6472-4015-9951-4aacf4f0d769-ovn-node-metrics-cert\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807191 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-systemd\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807249 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-ovnkube-config\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807313 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-ovnkube-script-lib\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807343 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-run-netns\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807375 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-var-lib-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807426 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-cni-netd\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807461 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-env-overrides\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807483 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-cni-bin\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807514 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807540 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-etc-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807564 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-slash\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807637 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-systemd-units\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807671 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgb8\" (UniqueName: \"kubernetes.io/projected/4010244d-6472-4015-9951-4aacf4f0d769-kube-api-access-bbgb8\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807700 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-node-log\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807741 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-log-socket\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807760 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807777 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-ovn\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807837 4904 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807850 4904 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-log-socket\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807861 4904 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807870 4904 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-node-log\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807878 4904 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807889 4904 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807898 4904 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807908 4904 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc 
kubenswrapper[4904]: I0223 10:16:22.807916 4904 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807924 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trdnv\" (UniqueName: \"kubernetes.io/projected/0acf61bd-42c5-4566-ac29-815afead2012-kube-api-access-trdnv\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807934 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0acf61bd-42c5-4566-ac29-815afead2012-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807943 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807952 4904 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807963 4904 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807975 4904 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.807992 4904 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.808009 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0acf61bd-42c5-4566-ac29-815afead2012-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.808020 4904 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0acf61bd-42c5-4566-ac29-815afead2012-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-kubelet\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909112 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc 
kubenswrapper[4904]: I0223 10:16:22.909133 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4010244d-6472-4015-9951-4aacf4f0d769-ovn-node-metrics-cert\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909156 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-systemd\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909175 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-ovnkube-config\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909208 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-ovnkube-script-lib\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909232 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-run-netns\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909247 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-var-lib-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909266 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-cni-netd\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909287 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-env-overrides\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909338 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-cni-bin\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909363 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909384 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-etc-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909400 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-slash\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909442 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-systemd-units\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909465 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbgb8\" (UniqueName: \"kubernetes.io/projected/4010244d-6472-4015-9951-4aacf4f0d769-kube-api-access-bbgb8\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909482 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-node-log\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909502 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-log-socket\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909519 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909537 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-ovn\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909613 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-ovn\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909653 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-kubelet\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909676 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909882 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-cni-bin\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909941 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-var-lib-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.909982 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-cni-netd\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910022 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-run-netns\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910125 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-systemd\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910149 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-systemd-units\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910280 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7rgp\" (UID: 
\"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910356 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-host-slash\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910427 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-node-log\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910499 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-log-socket\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910528 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-run-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910567 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-env-overrides\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.910990 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-ovnkube-config\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.911074 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4010244d-6472-4015-9951-4aacf4f0d769-etc-openvswitch\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.911120 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4010244d-6472-4015-9951-4aacf4f0d769-ovnkube-script-lib\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.913367 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4010244d-6472-4015-9951-4aacf4f0d769-ovn-node-metrics-cert\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:22 crc kubenswrapper[4904]: I0223 10:16:22.928513 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbgb8\" (UniqueName: \"kubernetes.io/projected/4010244d-6472-4015-9951-4aacf4f0d769-kube-api-access-bbgb8\") pod \"ovnkube-node-d7rgp\" (UID: \"4010244d-6472-4015-9951-4aacf4f0d769\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.059566 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.260882 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovnkube-controller/2.log" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263109 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovn-acl-logging/0.log" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263539 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9h7jb_0acf61bd-42c5-4566-ac29-815afead2012/ovn-controller/0.log" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263872 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" exitCode=0 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263892 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d" exitCode=0 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263900 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281" exitCode=0 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263909 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63" exitCode=0 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263915 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b" exitCode=0 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263923 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3" exitCode=0 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263928 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d" exitCode=143 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263936 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acf61bd-42c5-4566-ac29-815afead2012" containerID="ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d" exitCode=143 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263971 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" 
event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.263996 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264007 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264016 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264025 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264035 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264045 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264054 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264060 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264065 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264070 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264075 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264080 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264085 4904 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264091 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264098 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264106 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264112 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264117 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264122 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264127 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264133 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264138 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264143 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264149 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264153 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264160 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} 
Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264167 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264173 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264180 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264185 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264191 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264197 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264203 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264214 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264223 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264229 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264239 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" event={"ID":"0acf61bd-42c5-4566-ac29-815afead2012","Type":"ContainerDied","Data":"a4446c01fc69dfa03d531f75c23b6b5ef208d019b5c1c78b2e551a4518eba6ad"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264251 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264259 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264266 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} 
Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264273 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264279 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264284 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264290 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264296 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264301 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264305 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264318 4904 scope.go:117] "RemoveContainer" containerID="2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.264456 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9h7jb" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.270432 4904 generic.go:334] "Generic (PLEG): container finished" podID="4010244d-6472-4015-9951-4aacf4f0d769" containerID="a030c693c365c8e05e65d875c7ec7c5bcebb1e6752d9532ac97ebf1aeab09c74" exitCode=0 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.270483 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerDied","Data":"a030c693c365c8e05e65d875c7ec7c5bcebb1e6752d9532ac97ebf1aeab09c74"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.270548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"dad7921f4e9a346983b4d3a8457e7bf4e661bfb10cbdb85135cfdae6e4fc5f4b"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.272257 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fm2n2_65ad73a3-cf4b-49ec-b994-2d52cb43bc76/kube-multus/1.log" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.272743 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fm2n2_65ad73a3-cf4b-49ec-b994-2d52cb43bc76/kube-multus/0.log" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.272775 4904 generic.go:334] "Generic (PLEG): container finished" podID="65ad73a3-cf4b-49ec-b994-2d52cb43bc76" containerID="152bdc6379dd6bc50cffe55466797d4f53ac52eeb68ca86ce1e5e4b6ef052b83" exitCode=2 Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.272804 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fm2n2" event={"ID":"65ad73a3-cf4b-49ec-b994-2d52cb43bc76","Type":"ContainerDied","Data":"152bdc6379dd6bc50cffe55466797d4f53ac52eeb68ca86ce1e5e4b6ef052b83"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.272822 4904 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42"} Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.273184 4904 scope.go:117] "RemoveContainer" containerID="152bdc6379dd6bc50cffe55466797d4f53ac52eeb68ca86ce1e5e4b6ef052b83" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.273370 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fm2n2_openshift-multus(65ad73a3-cf4b-49ec-b994-2d52cb43bc76)\"" pod="openshift-multus/multus-fm2n2" podUID="65ad73a3-cf4b-49ec-b994-2d52cb43bc76" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.300304 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.323507 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9h7jb"] Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.327822 4904 scope.go:117] "RemoveContainer" containerID="6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.328483 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9h7jb"] Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.359649 4904 
scope.go:117] "RemoveContainer" containerID="9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.373109 4904 scope.go:117] "RemoveContainer" containerID="71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.389652 4904 scope.go:117] "RemoveContainer" containerID="db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.403995 4904 scope.go:117] "RemoveContainer" containerID="fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.421014 4904 scope.go:117] "RemoveContainer" containerID="2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.435000 4904 scope.go:117] "RemoveContainer" containerID="ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.452954 4904 scope.go:117] "RemoveContainer" containerID="30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.467198 4904 scope.go:117] "RemoveContainer" containerID="2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.467581 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": container with ID starting with 2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103 not found: ID does not exist" containerID="2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.467615 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} err="failed to get container status \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": rpc error: code = NotFound desc = could not find container \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": container with ID starting with 2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.467635 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.467891 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": container with ID starting with 94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7 not found: ID does not exist" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.467917 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} err="failed to get container status \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": rpc error: code = NotFound desc = could not find container \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": container with ID 
starting with 94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.467969 4904 scope.go:117] "RemoveContainer" containerID="6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.468446 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": container with ID starting with 6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d not found: ID does not exist" containerID="6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.468469 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} err="failed to get container status \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": rpc error: code = NotFound desc = could not find container \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": container with ID starting with 6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.468488 4904 scope.go:117] "RemoveContainer" containerID="9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.468941 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": container with ID starting with 9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281 not found: ID does not exist" containerID="9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.468959 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} err="failed to get container status \"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": rpc error: code = NotFound desc = could not find container \"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": container with ID starting with 9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.468972 4904 scope.go:117] "RemoveContainer" containerID="71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.469340 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": container with ID starting with 71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63 not found: ID does not exist" containerID="71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.469366 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} err="failed to get container status 
\"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": rpc error: code = NotFound desc = could not find container \"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": container with ID starting with 71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.469385 4904 scope.go:117] "RemoveContainer" containerID="db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.469687 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": container with ID starting with db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b not found: ID does not exist" containerID="db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.469763 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} err="failed to get container status \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": rpc error: code = NotFound desc = could not find container \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": container with ID starting with db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.469798 4904 scope.go:117] "RemoveContainer" containerID="fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.470153 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": container with ID starting with fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3 not found: ID does not exist" containerID="fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.470178 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} err="failed to get container status \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": rpc error: code = NotFound desc = could not find container \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": container with ID starting with fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.470197 4904 scope.go:117] "RemoveContainer" containerID="2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.470444 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": container with ID starting with 2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d not found: ID does not exist" containerID="2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.470469 4904 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} err="failed to get container status \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": rpc error: code = NotFound desc = could not find container \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": container with ID starting with 2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.470483 4904 scope.go:117] "RemoveContainer" containerID="ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.470768 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": container with ID starting with ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d not found: ID does not exist" containerID="ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.470801 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} err="failed to get container status \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": rpc error: code = NotFound desc = could not find container \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": container with ID starting with ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.470819 4904 scope.go:117] "RemoveContainer" containerID="30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700" Feb 23 10:16:23 crc kubenswrapper[4904]: E0223 10:16:23.471092 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": container with ID starting with 30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700 not found: ID does not exist" containerID="30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.471124 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} err="failed to get container status \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": rpc error: code = NotFound desc = could not find container \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": container with ID starting with 30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.471144 4904 scope.go:117] "RemoveContainer" containerID="2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.471574 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} err="failed to get container status \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": rpc error: code = NotFound desc = could not find container 
\"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": container with ID starting with 2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.471599 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.471897 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} err="failed to get container status \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": rpc error: code = NotFound desc = could not find container \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": container with ID starting with 94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.471920 4904 scope.go:117] "RemoveContainer" containerID="6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.472158 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} err="failed to get container status \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": rpc error: code = NotFound desc = could not find container \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": container with ID starting with 6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.472185 4904 scope.go:117] "RemoveContainer" containerID="9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.472493 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} err="failed to get container status \"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": rpc error: code = NotFound desc = could not find container \"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": container with ID starting with 9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.472511 4904 scope.go:117] "RemoveContainer" containerID="71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.472776 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} err="failed to get container status \"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": rpc error: code = NotFound desc = could not find container \"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": container with ID starting with 71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.472796 4904 scope.go:117] "RemoveContainer" containerID="db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.473081 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} err="failed to get container status \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": rpc error: code = NotFound desc = could not find container \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": container with ID starting with db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.473114 4904 scope.go:117] "RemoveContainer" containerID="fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.473404 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} err="failed to get container status \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": rpc error: code = NotFound desc = could not find container \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": container with ID starting with fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.473445 4904 scope.go:117] "RemoveContainer" containerID="2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.473747 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} err="failed to get container status \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": rpc error: code = NotFound desc = could not find container \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": container with ID starting with 2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.473774 4904 scope.go:117] "RemoveContainer" containerID="ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.474031 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} err="failed to get container status \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": rpc error: code = NotFound desc = could not find container \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": container with ID starting with ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.474053 4904 scope.go:117] "RemoveContainer" containerID="30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.474292 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} err="failed to get container status \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": rpc error: code = NotFound desc = could not find container \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": container with ID starting with 
30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.474314 4904 scope.go:117] "RemoveContainer" containerID="2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.474543 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} err="failed to get container status \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": rpc error: code = NotFound desc = could not find container \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": container with ID starting with 2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.474562 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.474847 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} err="failed to get container status \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": rpc error: code = NotFound desc = could not find container \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": container with ID starting with 94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.474864 4904 scope.go:117] "RemoveContainer" containerID="6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.475176 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} err="failed to get container status \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": rpc error: code = NotFound desc = could not find container \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": container with ID starting with 6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.475189 4904 scope.go:117] "RemoveContainer" containerID="9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.475432 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} err="failed to get container status \"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": rpc error: code = NotFound desc = could not find container \"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": container with ID starting with 9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.475445 4904 scope.go:117] "RemoveContainer" containerID="71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.475665 4904 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} err="failed to get container status \"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": rpc error: code = NotFound desc = could not find container \"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": container with ID starting with 71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.475681 4904 scope.go:117] "RemoveContainer" containerID="db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.475899 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} err="failed to get container status \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": rpc error: code = NotFound desc = could not find container \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": container with ID starting with db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.475912 4904 scope.go:117] "RemoveContainer" containerID="fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.476141 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} err="failed to get container status \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": rpc error: code = NotFound desc = could not find container \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": container with ID starting with fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.476159 4904 scope.go:117] "RemoveContainer" containerID="2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.476398 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} err="failed to get container status \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": rpc error: code = NotFound desc = could not find container \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": container with ID starting with 2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.476419 4904 scope.go:117] "RemoveContainer" containerID="ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.476624 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} err="failed to get container status \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": rpc error: code = NotFound desc = could not find container \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": container with ID starting with ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d not found: ID does not exist" Feb 
23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.476638 4904 scope.go:117] "RemoveContainer" containerID="30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.476879 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} err="failed to get container status \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": rpc error: code = NotFound desc = could not find container \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": container with ID starting with 30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.476905 4904 scope.go:117] "RemoveContainer" containerID="2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.477188 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} err="failed to get container status \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": rpc error: code = NotFound desc = could not find container \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": container with ID starting with 2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.477211 4904 scope.go:117] "RemoveContainer" containerID="94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.477558 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7"} err="failed to get container status \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": rpc error: code = NotFound desc = could not find container \"94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7\": container with ID starting with 94eab6f0a8a80eb5a509c56adfb8d9e8e6f7399140efa79233d2429bd04f63d7 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.477579 4904 scope.go:117] "RemoveContainer" containerID="6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.478080 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d"} err="failed to get container status \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": rpc error: code = NotFound desc = could not find container \"6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d\": container with ID starting with 6d62da76a1b8f6cda3a536e276de650736a3123c3ff4b2350477f4a798eb9e5d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.478099 4904 scope.go:117] "RemoveContainer" containerID="9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.478507 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281"} err="failed to get container status 
\"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": rpc error: code = NotFound desc = could not find container \"9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281\": container with ID starting with 9064a86ba9f1ea0a6c5af36ae972f14344b2c6e7cd9fc1bcd1b89b8457316281 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.478545 4904 scope.go:117] "RemoveContainer" containerID="71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.478819 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63"} err="failed to get container status \"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": rpc error: code = NotFound desc = could not find container \"71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63\": container with ID starting with 71aa7a46f3d9a550ae6c25c9d1f6b716123da7a2b2e0254e12437ef78bac2c63 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.478839 4904 scope.go:117] "RemoveContainer" containerID="db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.479541 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b"} err="failed to get container status \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": rpc error: code = NotFound desc = could not find container \"db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b\": container with ID starting with db21b9c7559b5bb4fb182720eaa931ba99716e6b9e7a47ea1458fff14db5c69b not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.479563 4904 scope.go:117] "RemoveContainer" containerID="fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.479827 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3"} err="failed to get container status \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": rpc error: code = NotFound desc = could not find container \"fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3\": container with ID starting with fc765528b4e29da171c0a0d4cdb1b27a439a7278c649b75759b68d9af806c9b3 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.479844 4904 scope.go:117] "RemoveContainer" containerID="2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.480121 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d"} err="failed to get container status \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": rpc error: code = NotFound desc = could not find container \"2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d\": container with ID starting with 2ae4f834872439abd978acc620c14d52c3fd2afa0421bbf1554bccabfa2b498d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.480146 4904 scope.go:117] "RemoveContainer" 
containerID="ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.480394 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d"} err="failed to get container status \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": rpc error: code = NotFound desc = could not find container \"ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d\": container with ID starting with ed08809ae4b1a275f297d8191845c8db06d02175357e9cfaa65c547adf71ee3d not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.480414 4904 scope.go:117] "RemoveContainer" containerID="30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.480672 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700"} err="failed to get container status \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": rpc error: code = NotFound desc = could not find container \"30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700\": container with ID starting with 30ad5a5309089c0cfedee5ffeaa133441a19deca6eb2ad61a2b5d825d8c2b700 not found: ID does not exist" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.480690 4904 scope.go:117] "RemoveContainer" containerID="2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103" Feb 23 10:16:23 crc kubenswrapper[4904]: I0223 10:16:23.481548 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103"} err="failed to get container status \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": rpc error: code = NotFound desc = could not find container \"2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103\": container with ID starting with 2f3a8d7a14f270b995b1157c9b6b75b138d73822593af1ef28a01e679613b103 not found: ID does not exist" Feb 23 10:16:24 crc kubenswrapper[4904]: I0223 10:16:24.280597 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"31a41632af2b59aab0c28d429e06b0d2390a869dad4300e97517dc09b807843e"} Feb 23 10:16:24 crc kubenswrapper[4904]: I0223 10:16:24.280921 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"a0ae21417bc60be64e19b6fa6f9b2c80a50e928d1bf2b287b7c66678234ac08f"} Feb 23 10:16:24 crc kubenswrapper[4904]: I0223 10:16:24.280936 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"53524195407e12f54211eef627aedbc07af0e328785330f31dba2846aeace704"} Feb 23 10:16:24 crc kubenswrapper[4904]: I0223 10:16:24.280945 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"e820eb787c2ed1089892dcdb037820c7e38baf3075899c241716cd169515e33e"} Feb 23 10:16:24 crc kubenswrapper[4904]: I0223 
10:16:24.280954 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"4fbc0c1893c922b18c5c54bcac2fb0fe6d96f378718a0f75a0afb2321efc1f93"} Feb 23 10:16:24 crc kubenswrapper[4904]: I0223 10:16:24.280962 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"8cfa1958997bee858c509a29ed22e28c943bf9c6c84b76b2f708379df269a725"} Feb 23 10:16:25 crc kubenswrapper[4904]: I0223 10:16:25.262468 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0acf61bd-42c5-4566-ac29-815afead2012" path="/var/lib/kubelet/pods/0acf61bd-42c5-4566-ac29-815afead2012/volumes" Feb 23 10:16:26 crc kubenswrapper[4904]: I0223 10:16:26.299672 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"21eaadcc6a9d5365667e2b0b59292cee9d38805ac084e997c5dfbd6faa970e9d"} Feb 23 10:16:29 crc kubenswrapper[4904]: I0223 10:16:29.322186 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" event={"ID":"4010244d-6472-4015-9951-4aacf4f0d769","Type":"ContainerStarted","Data":"2d717021d121d2f8041afa72c70c709001d2ccd5d6035df78a2f8e435786f66a"} Feb 23 10:16:29 crc kubenswrapper[4904]: I0223 10:16:29.323496 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:29 crc kubenswrapper[4904]: I0223 10:16:29.323549 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:29 crc kubenswrapper[4904]: I0223 10:16:29.355174 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" podStartSLOduration=7.355145455 podStartE2EDuration="7.355145455s" podCreationTimestamp="2026-02-23 10:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:16:29.353573791 +0000 UTC m=+622.773947324" watchObservedRunningTime="2026-02-23 10:16:29.355145455 +0000 UTC m=+622.775518978" Feb 23 10:16:29 crc kubenswrapper[4904]: I0223 10:16:29.358731 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:30 crc kubenswrapper[4904]: I0223 10:16:30.331324 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:30 crc kubenswrapper[4904]: I0223 10:16:30.392769 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:34 crc kubenswrapper[4904]: I0223 10:16:34.255382 4904 scope.go:117] "RemoveContainer" containerID="152bdc6379dd6bc50cffe55466797d4f53ac52eeb68ca86ce1e5e4b6ef052b83" Feb 23 10:16:35 crc kubenswrapper[4904]: I0223 10:16:35.361926 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fm2n2_65ad73a3-cf4b-49ec-b994-2d52cb43bc76/kube-multus/1.log" Feb 23 10:16:35 crc kubenswrapper[4904]: I0223 10:16:35.364743 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-fm2n2_65ad73a3-cf4b-49ec-b994-2d52cb43bc76/kube-multus/0.log" Feb 23 10:16:35 crc kubenswrapper[4904]: I0223 10:16:35.364917 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fm2n2" event={"ID":"65ad73a3-cf4b-49ec-b994-2d52cb43bc76","Type":"ContainerStarted","Data":"fcced2032c72bc2e707a4904c2150ae3907005578135865f28ca81e77f58a149"} Feb 23 10:16:47 crc kubenswrapper[4904]: I0223 10:16:47.398397 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:16:47 crc kubenswrapper[4904]: I0223 10:16:47.398954 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:16:47 crc kubenswrapper[4904]: I0223 10:16:47.399009 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:16:47 crc kubenswrapper[4904]: I0223 10:16:47.399558 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c2bc8b3e78a2b6bca2525dc93766b010390fb3eb8142d793d1bb25245ce12c0"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:16:47 crc kubenswrapper[4904]: I0223 10:16:47.399598 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://5c2bc8b3e78a2b6bca2525dc93766b010390fb3eb8142d793d1bb25245ce12c0" gracePeriod=600 Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.450816 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="5c2bc8b3e78a2b6bca2525dc93766b010390fb3eb8142d793d1bb25245ce12c0" exitCode=0 Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.450882 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"5c2bc8b3e78a2b6bca2525dc93766b010390fb3eb8142d793d1bb25245ce12c0"} Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.451528 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"5bd1bf756dbf679d7fd1d8585fe9574e3cbefdb7a18e5dc940344b15e289c6d4"} Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.451558 4904 scope.go:117] "RemoveContainer" containerID="ae646e613a51e7f064fed7847be41ac44a2747d1308e11ccf82810a5d1a00115" Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.834430 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk"] Feb 23 10:16:48 crc kubenswrapper[4904]: 
I0223 10:16:48.835501 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.838468 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.845950 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk"] Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.996983 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.997016 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:48 crc kubenswrapper[4904]: I0223 10:16:48.997037 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7xc\" (UniqueName: \"kubernetes.io/projected/13f8b25e-a541-452d-abba-022d7d0f2ae1-kube-api-access-bg7xc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:49 crc kubenswrapper[4904]: I0223 10:16:49.098196 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:49 crc kubenswrapper[4904]: I0223 10:16:49.098240 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:49 crc kubenswrapper[4904]: I0223 10:16:49.098261 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg7xc\" (UniqueName: \"kubernetes.io/projected/13f8b25e-a541-452d-abba-022d7d0f2ae1-kube-api-access-bg7xc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:49 crc kubenswrapper[4904]: I0223 10:16:49.098849 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:49 crc kubenswrapper[4904]: I0223 10:16:49.098930 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:49 crc kubenswrapper[4904]: I0223 10:16:49.119986 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg7xc\" (UniqueName: \"kubernetes.io/projected/13f8b25e-a541-452d-abba-022d7d0f2ae1-kube-api-access-bg7xc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:49 crc kubenswrapper[4904]: I0223 10:16:49.152665 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:49 crc kubenswrapper[4904]: I0223 10:16:49.579469 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk"] Feb 23 10:16:50 crc kubenswrapper[4904]: I0223 10:16:50.465359 4904 generic.go:334] "Generic (PLEG): container finished" podID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerID="28f91606946b3c7f554d1c58a6f48eb7eae81c9dd9da000561956ea330ad61fc" exitCode=0 Feb 23 10:16:50 crc kubenswrapper[4904]: I0223 10:16:50.465431 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" event={"ID":"13f8b25e-a541-452d-abba-022d7d0f2ae1","Type":"ContainerDied","Data":"28f91606946b3c7f554d1c58a6f48eb7eae81c9dd9da000561956ea330ad61fc"} Feb 23 10:16:50 crc kubenswrapper[4904]: I0223 10:16:50.465468 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" event={"ID":"13f8b25e-a541-452d-abba-022d7d0f2ae1","Type":"ContainerStarted","Data":"92fe6696e814f6120e111a895ab13559f9404f5d13afa710b4775bf33b6c565d"} Feb 23 10:16:52 crc kubenswrapper[4904]: I0223 10:16:52.480399 4904 generic.go:334] "Generic (PLEG): container finished" podID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerID="aace365733be3b2a5e494cf589112fbed140b12addcea01b606e6fed0139ada3" exitCode=0 Feb 23 10:16:52 crc kubenswrapper[4904]: I0223 10:16:52.480500 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" event={"ID":"13f8b25e-a541-452d-abba-022d7d0f2ae1","Type":"ContainerDied","Data":"aace365733be3b2a5e494cf589112fbed140b12addcea01b606e6fed0139ada3"} Feb 23 10:16:53 crc kubenswrapper[4904]: I0223 10:16:53.084847 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7rgp" Feb 23 10:16:53 crc kubenswrapper[4904]: I0223 10:16:53.280778 4904 
pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod0acf61bd-42c5-4566-ac29-815afead2012"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod0acf61bd-42c5-4566-ac29-815afead2012] : Timed out while waiting for systemd to remove kubepods-burstable-pod0acf61bd_42c5_4566_ac29_815afead2012.slice" Feb 23 10:16:53 crc kubenswrapper[4904]: I0223 10:16:53.706012 4904 generic.go:334] "Generic (PLEG): container finished" podID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerID="f9bbfb634cef21b15d9979ff7694a456227deaadb3ff5639b755ed2d2a9c1812" exitCode=0 Feb 23 10:16:53 crc kubenswrapper[4904]: I0223 10:16:53.706080 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" event={"ID":"13f8b25e-a541-452d-abba-022d7d0f2ae1","Type":"ContainerDied","Data":"f9bbfb634cef21b15d9979ff7694a456227deaadb3ff5639b755ed2d2a9c1812"} Feb 23 10:16:54 crc kubenswrapper[4904]: I0223 10:16:54.978152 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.080759 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-util\") pod \"13f8b25e-a541-452d-abba-022d7d0f2ae1\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.080893 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-bundle\") pod \"13f8b25e-a541-452d-abba-022d7d0f2ae1\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.081974 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg7xc\" (UniqueName: \"kubernetes.io/projected/13f8b25e-a541-452d-abba-022d7d0f2ae1-kube-api-access-bg7xc\") pod \"13f8b25e-a541-452d-abba-022d7d0f2ae1\" (UID: \"13f8b25e-a541-452d-abba-022d7d0f2ae1\") " Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.082910 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-bundle" (OuterVolumeSpecName: "bundle") pod "13f8b25e-a541-452d-abba-022d7d0f2ae1" (UID: "13f8b25e-a541-452d-abba-022d7d0f2ae1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.090013 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f8b25e-a541-452d-abba-022d7d0f2ae1-kube-api-access-bg7xc" (OuterVolumeSpecName: "kube-api-access-bg7xc") pod "13f8b25e-a541-452d-abba-022d7d0f2ae1" (UID: "13f8b25e-a541-452d-abba-022d7d0f2ae1"). InnerVolumeSpecName "kube-api-access-bg7xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.183993 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.184028 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg7xc\" (UniqueName: \"kubernetes.io/projected/13f8b25e-a541-452d-abba-022d7d0f2ae1-kube-api-access-bg7xc\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.315203 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-util" (OuterVolumeSpecName: "util") pod "13f8b25e-a541-452d-abba-022d7d0f2ae1" (UID: "13f8b25e-a541-452d-abba-022d7d0f2ae1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.387436 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13f8b25e-a541-452d-abba-022d7d0f2ae1-util\") on node \"crc\" DevicePath \"\"" Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.719254 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" event={"ID":"13f8b25e-a541-452d-abba-022d7d0f2ae1","Type":"ContainerDied","Data":"92fe6696e814f6120e111a895ab13559f9404f5d13afa710b4775bf33b6c565d"} Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.719295 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92fe6696e814f6120e111a895ab13559f9404f5d13afa710b4775bf33b6c565d" Feb 23 10:16:55 crc kubenswrapper[4904]: I0223 10:16:55.719661 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.000222 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5"] Feb 23 10:17:05 crc kubenswrapper[4904]: E0223 10:17:05.000994 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerName="util" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.001009 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerName="util" Feb 23 10:17:05 crc kubenswrapper[4904]: E0223 10:17:05.001019 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerName="pull" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.001026 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerName="pull" Feb 23 10:17:05 crc kubenswrapper[4904]: E0223 10:17:05.001040 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerName="extract" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.001047 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerName="extract" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.001163 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f8b25e-a541-452d-abba-022d7d0f2ae1" containerName="extract" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.001608 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.004213 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.004934 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.005082 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-9dxbs" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.024282 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.122043 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgf8\" (UniqueName: \"kubernetes.io/projected/0575e4d3-040f-492f-92ff-ea6a433ce2a2-kube-api-access-2jgf8\") pod \"obo-prometheus-operator-68bc856cb9-mzcx5\" (UID: \"0575e4d3-040f-492f-92ff-ea6a433ce2a2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.124086 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.125082 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.128265 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-96wrc" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.128878 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.139616 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.140630 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.143628 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.161809 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.223435 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82047769-4adf-4b19-bd85-04bd9c681616-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn\" (UID: \"82047769-4adf-4b19-bd85-04bd9c681616\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.223842 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgf8\" (UniqueName: \"kubernetes.io/projected/0575e4d3-040f-492f-92ff-ea6a433ce2a2-kube-api-access-2jgf8\") pod \"obo-prometheus-operator-68bc856cb9-mzcx5\" (UID: \"0575e4d3-040f-492f-92ff-ea6a433ce2a2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.223868 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82047769-4adf-4b19-bd85-04bd9c681616-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn\" (UID: \"82047769-4adf-4b19-bd85-04bd9c681616\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.254896 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgf8\" (UniqueName: \"kubernetes.io/projected/0575e4d3-040f-492f-92ff-ea6a433ce2a2-kube-api-access-2jgf8\") pod \"obo-prometheus-operator-68bc856cb9-mzcx5\" (UID: \"0575e4d3-040f-492f-92ff-ea6a433ce2a2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.317975 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.322294 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mrt8q"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.323063 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.325514 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8341f5fc-eb1e-4d05-935d-653320cfac4a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln\" (UID: \"8341f5fc-eb1e-4d05-935d-653320cfac4a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.325575 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82047769-4adf-4b19-bd85-04bd9c681616-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn\" (UID: \"82047769-4adf-4b19-bd85-04bd9c681616\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.325674 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82047769-4adf-4b19-bd85-04bd9c681616-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn\" (UID: \"82047769-4adf-4b19-bd85-04bd9c681616\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.325748 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8341f5fc-eb1e-4d05-935d-653320cfac4a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln\" (UID: \"8341f5fc-eb1e-4d05-935d-653320cfac4a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.326268 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-p7n5x" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.326460 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.330307 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82047769-4adf-4b19-bd85-04bd9c681616-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn\" (UID: \"82047769-4adf-4b19-bd85-04bd9c681616\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.330691 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82047769-4adf-4b19-bd85-04bd9c681616-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn\" (UID: \"82047769-4adf-4b19-bd85-04bd9c681616\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.389653 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mrt8q"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.426760 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b2753c5-297c-45a4-be5d-66e00f49448e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mrt8q\" (UID: \"6b2753c5-297c-45a4-be5d-66e00f49448e\") " pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.426873 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8341f5fc-eb1e-4d05-935d-653320cfac4a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln\" (UID: \"8341f5fc-eb1e-4d05-935d-653320cfac4a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.428366 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjn44\" (UniqueName: \"kubernetes.io/projected/6b2753c5-297c-45a4-be5d-66e00f49448e-kube-api-access-sjn44\") pod \"observability-operator-59bdc8b94-mrt8q\" (UID: \"6b2753c5-297c-45a4-be5d-66e00f49448e\") " pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.428407 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8341f5fc-eb1e-4d05-935d-653320cfac4a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln\" (UID: \"8341f5fc-eb1e-4d05-935d-653320cfac4a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.432603 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8341f5fc-eb1e-4d05-935d-653320cfac4a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln\" (UID: \"8341f5fc-eb1e-4d05-935d-653320cfac4a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.441159 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.453552 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8341f5fc-eb1e-4d05-935d-653320cfac4a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln\" (UID: \"8341f5fc-eb1e-4d05-935d-653320cfac4a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.454640 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.529408 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjn44\" (UniqueName: \"kubernetes.io/projected/6b2753c5-297c-45a4-be5d-66e00f49448e-kube-api-access-sjn44\") pod \"observability-operator-59bdc8b94-mrt8q\" (UID: \"6b2753c5-297c-45a4-be5d-66e00f49448e\") " pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.529929 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b2753c5-297c-45a4-be5d-66e00f49448e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mrt8q\" (UID: \"6b2753c5-297c-45a4-be5d-66e00f49448e\") " pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.535887 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b2753c5-297c-45a4-be5d-66e00f49448e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mrt8q\" (UID: \"6b2753c5-297c-45a4-be5d-66e00f49448e\") " pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.554878 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6dm8x"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.555099 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjn44\" (UniqueName: \"kubernetes.io/projected/6b2753c5-297c-45a4-be5d-66e00f49448e-kube-api-access-sjn44\") pod \"observability-operator-59bdc8b94-mrt8q\" (UID: \"6b2753c5-297c-45a4-be5d-66e00f49448e\") " pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.556540 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.563223 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6dm8x"] Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.570239 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-xg6vj" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.692393 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.706004 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5"] Feb 23 10:17:05 crc kubenswrapper[4904]: W0223 10:17:05.723761 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0575e4d3_040f_492f_92ff_ea6a433ce2a2.slice/crio-fc5bdd63487cc04828c52b7aa18f804cf291df5e3e44dc71389e701fa162c53f WatchSource:0}: Error finding container fc5bdd63487cc04828c52b7aa18f804cf291df5e3e44dc71389e701fa162c53f: Status 404 returned error can't find the container with id fc5bdd63487cc04828c52b7aa18f804cf291df5e3e44dc71389e701fa162c53f Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.732861 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8k2z\" (UniqueName: \"kubernetes.io/projected/e04b69a5-a057-4e3d-81bf-509d8cce4ec4-kube-api-access-x8k2z\") pod \"perses-operator-5bf474d74f-6dm8x\" (UID: \"e04b69a5-a057-4e3d-81bf-509d8cce4ec4\") " pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.732954 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e04b69a5-a057-4e3d-81bf-509d8cce4ec4-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6dm8x\" (UID: \"e04b69a5-a057-4e3d-81bf-509d8cce4ec4\") " pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.785303 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5" event={"ID":"0575e4d3-040f-492f-92ff-ea6a433ce2a2","Type":"ContainerStarted","Data":"fc5bdd63487cc04828c52b7aa18f804cf291df5e3e44dc71389e701fa162c53f"} Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.834917 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8k2z\" (UniqueName: \"kubernetes.io/projected/e04b69a5-a057-4e3d-81bf-509d8cce4ec4-kube-api-access-x8k2z\") pod \"perses-operator-5bf474d74f-6dm8x\" (UID: \"e04b69a5-a057-4e3d-81bf-509d8cce4ec4\") " pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.835278 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e04b69a5-a057-4e3d-81bf-509d8cce4ec4-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6dm8x\" (UID: \"e04b69a5-a057-4e3d-81bf-509d8cce4ec4\") " pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.836215 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e04b69a5-a057-4e3d-81bf-509d8cce4ec4-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6dm8x\" (UID: \"e04b69a5-a057-4e3d-81bf-509d8cce4ec4\") " pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.874501 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8k2z\" (UniqueName: \"kubernetes.io/projected/e04b69a5-a057-4e3d-81bf-509d8cce4ec4-kube-api-access-x8k2z\") pod 
\"perses-operator-5bf474d74f-6dm8x\" (UID: \"e04b69a5-a057-4e3d-81bf-509d8cce4ec4\") " pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" Feb 23 10:17:05 crc kubenswrapper[4904]: I0223 10:17:05.918004 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" Feb 23 10:17:06 crc kubenswrapper[4904]: I0223 10:17:06.029577 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln"] Feb 23 10:17:06 crc kubenswrapper[4904]: I0223 10:17:06.041329 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn"] Feb 23 10:17:06 crc kubenswrapper[4904]: I0223 10:17:06.110104 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mrt8q"] Feb 23 10:17:06 crc kubenswrapper[4904]: W0223 10:17:06.120321 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b2753c5_297c_45a4_be5d_66e00f49448e.slice/crio-2184f13d9ab68b5a5643f5193d551e4ddf46357d2727bb3b91b7d1fe4eb3c0e4 WatchSource:0}: Error finding container 2184f13d9ab68b5a5643f5193d551e4ddf46357d2727bb3b91b7d1fe4eb3c0e4: Status 404 returned error can't find the container with id 2184f13d9ab68b5a5643f5193d551e4ddf46357d2727bb3b91b7d1fe4eb3c0e4 Feb 23 10:17:06 crc kubenswrapper[4904]: I0223 10:17:06.236267 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6dm8x"] Feb 23 10:17:06 crc kubenswrapper[4904]: I0223 10:17:06.794155 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" event={"ID":"e04b69a5-a057-4e3d-81bf-509d8cce4ec4","Type":"ContainerStarted","Data":"54f662e28d38a5deb824113322d5737125b061a7a78d35b742ce2329fd20b760"} Feb 23 10:17:06 crc kubenswrapper[4904]: I0223 10:17:06.795278 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" event={"ID":"8341f5fc-eb1e-4d05-935d-653320cfac4a","Type":"ContainerStarted","Data":"f524829843b88f696616ebedda0aa99270bc33e0f18aec1164a6d260c0fe9e1c"} Feb 23 10:17:06 crc kubenswrapper[4904]: I0223 10:17:06.796138 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" event={"ID":"82047769-4adf-4b19-bd85-04bd9c681616","Type":"ContainerStarted","Data":"81e1f5a4fec5b570191a0ccde6e4a04ee4287c037619388519625d78ac0f53e0"} Feb 23 10:17:06 crc kubenswrapper[4904]: I0223 10:17:06.797039 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" event={"ID":"6b2753c5-297c-45a4-be5d-66e00f49448e","Type":"ContainerStarted","Data":"2184f13d9ab68b5a5643f5193d551e4ddf46357d2727bb3b91b7d1fe4eb3c0e4"} Feb 23 10:17:07 crc kubenswrapper[4904]: I0223 10:17:07.715890 4904 scope.go:117] "RemoveContainer" containerID="7d519a1ba76ec50b4139ed463b03cefbb972d613c42a23df15916d03415cbb42" Feb 23 10:17:07 crc kubenswrapper[4904]: I0223 10:17:07.850002 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fm2n2_65ad73a3-cf4b-49ec-b994-2d52cb43bc76/kube-multus/1.log" Feb 23 10:17:20 crc kubenswrapper[4904]: E0223 10:17:20.744177 4904 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Feb 23 10:17:20 crc kubenswrapper[4904]: E0223 10:17:20.744983 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn_openshift-operators(82047769-4adf-4b19-bd85-04bd9c681616): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 10:17:20 crc kubenswrapper[4904]: E0223 10:17:20.746183 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" podUID="82047769-4adf-4b19-bd85-04bd9c681616" Feb 23 10:17:20 crc kubenswrapper[4904]: E0223 10:17:20.778030 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Feb 23 10:17:20 crc kubenswrapper[4904]: E0223 10:17:20.778194 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln_openshift-operators(8341f5fc-eb1e-4d05-935d-653320cfac4a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 10:17:20 crc kubenswrapper[4904]: E0223 10:17:20.779333 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" podUID="8341f5fc-eb1e-4d05-935d-653320cfac4a" Feb 23 10:17:20 crc kubenswrapper[4904]: I0223 10:17:20.990426 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" event={"ID":"6b2753c5-297c-45a4-be5d-66e00f49448e","Type":"ContainerStarted","Data":"f944339990a3bb507e43c43c87f5163d34983f2372ac782a35fdd43bccba7833"} Feb 23 10:17:20 crc kubenswrapper[4904]: I0223 10:17:20.991064 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" Feb 23 10:17:20 crc kubenswrapper[4904]: I0223 10:17:20.992068 4904 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mrt8q container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.39:8081/healthz\": dial tcp 10.217.0.39:8081: connect: connection refused" start-of-body= Feb 23 10:17:20 crc kubenswrapper[4904]: I0223 10:17:20.992112 4904 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" podUID="6b2753c5-297c-45a4-be5d-66e00f49448e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.39:8081/healthz\": dial tcp 10.217.0.39:8081: connect: connection refused"
Feb 23 10:17:20 crc kubenswrapper[4904]: I0223 10:17:20.993772 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" event={"ID":"e04b69a5-a057-4e3d-81bf-509d8cce4ec4","Type":"ContainerStarted","Data":"e88847edf46ad6f72a42cc502cdbb29d5ab5e36d25bfa37ba597ac53e3be9952"}
Feb 23 10:17:20 crc kubenswrapper[4904]: I0223 10:17:20.993812 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-6dm8x"
Feb 23 10:17:20 crc kubenswrapper[4904]: E0223 10:17:20.995572 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" podUID="82047769-4adf-4b19-bd85-04bd9c681616"
Feb 23 10:17:20 crc kubenswrapper[4904]: E0223 10:17:20.995613 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" podUID="8341f5fc-eb1e-4d05-935d-653320cfac4a"
Feb 23 10:17:21 crc kubenswrapper[4904]: I0223 10:17:21.019988 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-mrt8q" podStartSLOduration=1.320990506 podStartE2EDuration="16.019967499s" podCreationTimestamp="2026-02-23 10:17:05 +0000 UTC" firstStartedPulling="2026-02-23 10:17:06.122413249 +0000 UTC m=+659.542786762" lastFinishedPulling="2026-02-23 10:17:20.821390242 +0000 UTC m=+674.241763755" observedRunningTime="2026-02-23 10:17:21.016112221 +0000 UTC m=+674.436485734" watchObservedRunningTime="2026-02-23 10:17:21.019967499 +0000 UTC m=+674.440341012"
Feb 23 10:17:21 crc kubenswrapper[4904]: I0223 10:17:21.072134 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-6dm8x" podStartSLOduration=1.535327525 podStartE2EDuration="16.072115839s" podCreationTimestamp="2026-02-23 10:17:05 +0000 UTC" firstStartedPulling="2026-02-23 10:17:06.246362798 +0000 UTC m=+659.666736311" lastFinishedPulling="2026-02-23 10:17:20.783151112 +0000 UTC m=+674.203524625" observedRunningTime="2026-02-23 10:17:21.067470759 +0000 UTC m=+674.487844292" watchObservedRunningTime="2026-02-23 10:17:21.072115839 +0000 UTC m=+674.492489352"
Feb 23 10:17:22 crc kubenswrapper[4904]: I0223 10:17:22.000282 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5" event={"ID":"0575e4d3-040f-492f-92ff-ea6a433ce2a2","Type":"ContainerStarted","Data":"091e2a6e9334cf5189eaaf04ddd349b9bf8aa618bf9d67960bad9177cdb8444e"}
Feb 23 10:17:22 crc kubenswrapper[4904]: I0223 10:17:22.020775 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mzcx5" podStartSLOduration=2.949895236 podStartE2EDuration="18.020751337s" podCreationTimestamp="2026-02-23 10:17:04 +0000 UTC" firstStartedPulling="2026-02-23 10:17:05.728401142 +0000 UTC m=+659.148774655" lastFinishedPulling="2026-02-23 10:17:20.799257243 +0000 UTC m=+674.219630756" observedRunningTime="2026-02-23 10:17:22.015618973 +0000 UTC m=+675.435992486" watchObservedRunningTime="2026-02-23 10:17:22.020751337 +0000 UTC m=+675.441124850"
Feb 23 10:17:22 crc kubenswrapper[4904]: I0223 10:17:22.027095 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-mrt8q"
Feb 23 10:17:25 crc kubenswrapper[4904]: I0223 10:17:25.922129 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-6dm8x"
Feb 23 10:17:35 crc kubenswrapper[4904]: I0223 10:17:35.071477 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" event={"ID":"82047769-4adf-4b19-bd85-04bd9c681616","Type":"ContainerStarted","Data":"7ee17e421bc374af51849b827ac728c86ca03d3f5bc93fea97096b0bc5e27b2b"}
Feb 23 10:17:35 crc kubenswrapper[4904]: I0223 10:17:35.095208 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn" podStartSLOduration=2.080152882 podStartE2EDuration="30.095184118s" podCreationTimestamp="2026-02-23 10:17:05 +0000 UTC" firstStartedPulling="2026-02-23 10:17:06.056752841 +0000 UTC m=+659.477126354" lastFinishedPulling="2026-02-23 10:17:34.071784077 +0000 UTC m=+687.492157590" observedRunningTime="2026-02-23 10:17:35.090433855 +0000 UTC m=+688.510807368" watchObservedRunningTime="2026-02-23 10:17:35.095184118 +0000 UTC m=+688.515557641"
Feb 23 10:17:36 crc kubenswrapper[4904]: I0223 10:17:36.077362 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" event={"ID":"8341f5fc-eb1e-4d05-935d-653320cfac4a","Type":"ContainerStarted","Data":"dc71363b9c8e66ce72f315aedd262f305b05a95233cb1304c1f08cdf896c71c7"}
Feb 23 10:17:36 crc kubenswrapper[4904]: I0223 10:17:36.135834 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln" podStartSLOduration=-9223372005.718964 podStartE2EDuration="31.13581165s" podCreationTimestamp="2026-02-23 10:17:05 +0000 UTC" firstStartedPulling="2026-02-23 10:17:06.048924892 +0000 UTC m=+659.469298405" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:17:36.126183691 +0000 UTC m=+689.546557204" watchObservedRunningTime="2026-02-23 10:17:36.13581165 +0000 UTC m=+689.556185163"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.540750 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"]
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.543047 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.545368 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.556025 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"]
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.614619 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2dkk\" (UniqueName: \"kubernetes.io/projected/1b89453f-1d71-498c-a07e-96fcb9b64f97-kube-api-access-j2dkk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.615087 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.615272 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.716055 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.716178 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2dkk\" (UniqueName: \"kubernetes.io/projected/1b89453f-1d71-498c-a07e-96fcb9b64f97-kube-api-access-j2dkk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.716219 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.716915 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.717004 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.758089 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2dkk\" (UniqueName: \"kubernetes.io/projected/1b89453f-1d71-498c-a07e-96fcb9b64f97-kube-api-access-j2dkk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:51 crc kubenswrapper[4904]: I0223 10:17:51.866617 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:52 crc kubenswrapper[4904]: I0223 10:17:52.391959 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"]
Feb 23 10:17:52 crc kubenswrapper[4904]: W0223 10:17:52.399878 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b89453f_1d71_498c_a07e_96fcb9b64f97.slice/crio-6b6efbaabf48a8375806792a57b0fff641e7450eb2ce2a7b11ef9b113cf03469 WatchSource:0}: Error finding container 6b6efbaabf48a8375806792a57b0fff641e7450eb2ce2a7b11ef9b113cf03469: Status 404 returned error can't find the container with id 6b6efbaabf48a8375806792a57b0fff641e7450eb2ce2a7b11ef9b113cf03469
Feb 23 10:17:53 crc kubenswrapper[4904]: I0223 10:17:53.187134 4904 generic.go:334] "Generic (PLEG): container finished" podID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerID="1b9f202f9e5d8c9d086a8a6e8fc36b44abd64fb68ad124515436c28fb098c985" exitCode=0
Feb 23 10:17:53 crc kubenswrapper[4904]: I0223 10:17:53.187217 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r" event={"ID":"1b89453f-1d71-498c-a07e-96fcb9b64f97","Type":"ContainerDied","Data":"1b9f202f9e5d8c9d086a8a6e8fc36b44abd64fb68ad124515436c28fb098c985"}
Feb 23 10:17:53 crc kubenswrapper[4904]: I0223 10:17:53.187248 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r" event={"ID":"1b89453f-1d71-498c-a07e-96fcb9b64f97","Type":"ContainerStarted","Data":"6b6efbaabf48a8375806792a57b0fff641e7450eb2ce2a7b11ef9b113cf03469"}
Feb 23 10:17:55 crc kubenswrapper[4904]: I0223 10:17:55.217348 4904 generic.go:334] "Generic (PLEG): container finished" podID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerID="759575611b030d8dcc4e3257d0652df621b061c6e3a637dbaf660b36e1a0481e" exitCode=0
Feb 23 10:17:55 crc kubenswrapper[4904]: I0223 10:17:55.217447 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r" event={"ID":"1b89453f-1d71-498c-a07e-96fcb9b64f97","Type":"ContainerDied","Data":"759575611b030d8dcc4e3257d0652df621b061c6e3a637dbaf660b36e1a0481e"}
Feb 23 10:17:56 crc kubenswrapper[4904]: I0223 10:17:56.229231 4904 generic.go:334] "Generic (PLEG): container finished" podID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerID="7db3c6dc68ed664f4e3f37874033f01a5edaeaf912107332ef2eb450a72734f7" exitCode=0
Feb 23 10:17:56 crc kubenswrapper[4904]: I0223 10:17:56.229345 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r" event={"ID":"1b89453f-1d71-498c-a07e-96fcb9b64f97","Type":"ContainerDied","Data":"7db3c6dc68ed664f4e3f37874033f01a5edaeaf912107332ef2eb450a72734f7"}
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.516017 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.705804 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2dkk\" (UniqueName: \"kubernetes.io/projected/1b89453f-1d71-498c-a07e-96fcb9b64f97-kube-api-access-j2dkk\") pod \"1b89453f-1d71-498c-a07e-96fcb9b64f97\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") "
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.706119 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-bundle\") pod \"1b89453f-1d71-498c-a07e-96fcb9b64f97\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") "
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.706220 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-util\") pod \"1b89453f-1d71-498c-a07e-96fcb9b64f97\" (UID: \"1b89453f-1d71-498c-a07e-96fcb9b64f97\") "
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.706566 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-bundle" (OuterVolumeSpecName: "bundle") pod "1b89453f-1d71-498c-a07e-96fcb9b64f97" (UID: "1b89453f-1d71-498c-a07e-96fcb9b64f97"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.706856 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.712654 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b89453f-1d71-498c-a07e-96fcb9b64f97-kube-api-access-j2dkk" (OuterVolumeSpecName: "kube-api-access-j2dkk") pod "1b89453f-1d71-498c-a07e-96fcb9b64f97" (UID: "1b89453f-1d71-498c-a07e-96fcb9b64f97"). InnerVolumeSpecName "kube-api-access-j2dkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.732224 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-util" (OuterVolumeSpecName: "util") pod "1b89453f-1d71-498c-a07e-96fcb9b64f97" (UID: "1b89453f-1d71-498c-a07e-96fcb9b64f97"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.808582 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b89453f-1d71-498c-a07e-96fcb9b64f97-util\") on node \"crc\" DevicePath \"\""
Feb 23 10:17:57 crc kubenswrapper[4904]: I0223 10:17:57.808627 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2dkk\" (UniqueName: \"kubernetes.io/projected/1b89453f-1d71-498c-a07e-96fcb9b64f97-kube-api-access-j2dkk\") on node \"crc\" DevicePath \"\""
Feb 23 10:17:58 crc kubenswrapper[4904]: I0223 10:17:58.247892 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r" event={"ID":"1b89453f-1d71-498c-a07e-96fcb9b64f97","Type":"ContainerDied","Data":"6b6efbaabf48a8375806792a57b0fff641e7450eb2ce2a7b11ef9b113cf03469"}
Feb 23 10:17:58 crc kubenswrapper[4904]: I0223 10:17:58.247949 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6efbaabf48a8375806792a57b0fff641e7450eb2ce2a7b11ef9b113cf03469"
Feb 23 10:17:58 crc kubenswrapper[4904]: I0223 10:17:58.247908 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.142604 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-x5kct"]
Feb 23 10:18:03 crc kubenswrapper[4904]: E0223 10:18:03.143652 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerName="util"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.143668 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerName="util"
Feb 23 10:18:03 crc kubenswrapper[4904]: E0223 10:18:03.143685 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerName="extract"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.143692 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerName="extract"
Feb 23 10:18:03 crc kubenswrapper[4904]: E0223 10:18:03.143704 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerName="pull"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.143723 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerName="pull"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.143830 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b89453f-1d71-498c-a07e-96fcb9b64f97" containerName="extract"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.144412 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-x5kct"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.147217 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xwgsz"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.147308 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.147355 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.156618 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-x5kct"]
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.294896 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbpv\" (UniqueName: \"kubernetes.io/projected/eb53b4a6-a446-46c7-b022-0079015db963-kube-api-access-clbpv\") pod \"nmstate-operator-694c9596b7-x5kct\" (UID: \"eb53b4a6-a446-46c7-b022-0079015db963\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-x5kct"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.397078 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clbpv\" (UniqueName: \"kubernetes.io/projected/eb53b4a6-a446-46c7-b022-0079015db963-kube-api-access-clbpv\") pod \"nmstate-operator-694c9596b7-x5kct\" (UID: \"eb53b4a6-a446-46c7-b022-0079015db963\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-x5kct"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.413742 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clbpv\" (UniqueName: \"kubernetes.io/projected/eb53b4a6-a446-46c7-b022-0079015db963-kube-api-access-clbpv\") pod \"nmstate-operator-694c9596b7-x5kct\" (UID: \"eb53b4a6-a446-46c7-b022-0079015db963\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-x5kct"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.462008 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-x5kct"
Feb 23 10:18:03 crc kubenswrapper[4904]: I0223 10:18:03.760244 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-x5kct"]
Feb 23 10:18:04 crc kubenswrapper[4904]: I0223 10:18:04.428704 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-x5kct" event={"ID":"eb53b4a6-a446-46c7-b022-0079015db963","Type":"ContainerStarted","Data":"77d53fb226276082b67f03925bd72f3c3327398783acc443e9ef0797852133aa"}
Feb 23 10:18:06 crc kubenswrapper[4904]: I0223 10:18:06.447177 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-x5kct" event={"ID":"eb53b4a6-a446-46c7-b022-0079015db963","Type":"ContainerStarted","Data":"dcc67596db7082c2f6e783a7ede82829c0d970dddc9cc06b0df940d297b84c54"}
Feb 23 10:18:06 crc kubenswrapper[4904]: I0223 10:18:06.476565 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-x5kct" podStartSLOduration=1.455855227 podStartE2EDuration="3.476526394s" podCreationTimestamp="2026-02-23 10:18:03 +0000 UTC" firstStartedPulling="2026-02-23 10:18:03.768785766 +0000 UTC m=+717.189159279" lastFinishedPulling="2026-02-23 10:18:05.789456933 +0000 UTC m=+719.209830446" observedRunningTime="2026-02-23 10:18:06.473753505 +0000 UTC m=+719.894127038" watchObservedRunningTime="2026-02-23 10:18:06.476526394 +0000 UTC m=+719.896899917"
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.865642 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm"]
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.867053 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm"
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.869509 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wwtzj"
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.881209 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm"]
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.939294 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtqbz\" (UniqueName: \"kubernetes.io/projected/505d4d9b-90d8-40c4-ac88-7ebe41db94ed-kube-api-access-rtqbz\") pod \"nmstate-metrics-58c85c668d-m9gsm\" (UID: \"505d4d9b-90d8-40c4-ac88-7ebe41db94ed\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm"
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.941563 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"]
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.942828 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.946620 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.950355 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hrczn"]
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.951474 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:12 crc kubenswrapper[4904]: I0223 10:18:12.987132 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"]
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.040820 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtfrg\" (UniqueName: \"kubernetes.io/projected/103dbc2d-351d-479d-91c4-59c5a650c8e5-kube-api-access-wtfrg\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.040884 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-nmstate-lock\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.041021 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtqbz\" (UniqueName: \"kubernetes.io/projected/505d4d9b-90d8-40c4-ac88-7ebe41db94ed-kube-api-access-rtqbz\") pod \"nmstate-metrics-58c85c668d-m9gsm\" (UID: \"505d4d9b-90d8-40c4-ac88-7ebe41db94ed\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.041103 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn9jh\" (UniqueName: \"kubernetes.io/projected/f927138f-bb89-4c3a-a543-d5256a3cba5c-kube-api-access-cn9jh\") pod \"nmstate-webhook-866bcb46dc-nw5sw\" (UID: \"f927138f-bb89-4c3a-a543-d5256a3cba5c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.041160 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-dbus-socket\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.041188 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-ovs-socket\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.041272 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f927138f-bb89-4c3a-a543-d5256a3cba5c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nw5sw\" (UID: \"f927138f-bb89-4c3a-a543-d5256a3cba5c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.062424 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtqbz\" (UniqueName: \"kubernetes.io/projected/505d4d9b-90d8-40c4-ac88-7ebe41db94ed-kube-api-access-rtqbz\") pod \"nmstate-metrics-58c85c668d-m9gsm\" (UID: \"505d4d9b-90d8-40c4-ac88-7ebe41db94ed\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.078310 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"]
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.079038 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.080550 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ms7pz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.081192 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.081935 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.129781 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"]
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.142791 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-dbus-socket\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143029 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-ovs-socket\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143114 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f927138f-bb89-4c3a-a543-d5256a3cba5c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nw5sw\" (UID: \"f927138f-bb89-4c3a-a543-d5256a3cba5c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143183 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-dbus-socket\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143202 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtfrg\" (UniqueName: \"kubernetes.io/projected/103dbc2d-351d-479d-91c4-59c5a650c8e5-kube-api-access-wtfrg\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143400 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b7287f83-1d7a-4d2c-a330-36c3ab421222-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143450 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mj24\" (UniqueName: \"kubernetes.io/projected/b7287f83-1d7a-4d2c-a330-36c3ab421222-kube-api-access-6mj24\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143513 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-nmstate-lock\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143533 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7287f83-1d7a-4d2c-a330-36c3ab421222-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143644 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn9jh\" (UniqueName: \"kubernetes.io/projected/f927138f-bb89-4c3a-a543-d5256a3cba5c-kube-api-access-cn9jh\") pod \"nmstate-webhook-866bcb46dc-nw5sw\" (UID: \"f927138f-bb89-4c3a-a543-d5256a3cba5c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143846 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-ovs-socket\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.143915 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/103dbc2d-351d-479d-91c4-59c5a650c8e5-nmstate-lock\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.155483 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f927138f-bb89-4c3a-a543-d5256a3cba5c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nw5sw\" (UID: \"f927138f-bb89-4c3a-a543-d5256a3cba5c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.169596 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn9jh\" (UniqueName: \"kubernetes.io/projected/f927138f-bb89-4c3a-a543-d5256a3cba5c-kube-api-access-cn9jh\") pod \"nmstate-webhook-866bcb46dc-nw5sw\" (UID: \"f927138f-bb89-4c3a-a543-d5256a3cba5c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.170462 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtfrg\" (UniqueName: \"kubernetes.io/projected/103dbc2d-351d-479d-91c4-59c5a650c8e5-kube-api-access-wtfrg\") pod \"nmstate-handler-hrczn\" (UID: \"103dbc2d-351d-479d-91c4-59c5a650c8e5\") " pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.236829 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.244860 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b7287f83-1d7a-4d2c-a330-36c3ab421222-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.244919 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mj24\" (UniqueName: \"kubernetes.io/projected/b7287f83-1d7a-4d2c-a330-36c3ab421222-kube-api-access-6mj24\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.244973 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7287f83-1d7a-4d2c-a330-36c3ab421222-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.246313 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b7287f83-1d7a-4d2c-a330-36c3ab421222-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.249881 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7287f83-1d7a-4d2c-a330-36c3ab421222-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.268308 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.272414 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mj24\" (UniqueName: \"kubernetes.io/projected/b7287f83-1d7a-4d2c-a330-36c3ab421222-kube-api-access-6mj24\") pod \"nmstate-console-plugin-5c78fc5d65-2d2cz\" (UID: \"b7287f83-1d7a-4d2c-a330-36c3ab421222\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.281237 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.294385 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74649cfd9b-4vflk"]
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.295180 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.307079 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74649cfd9b-4vflk"]
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.348050 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-service-ca\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.348097 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbg92\" (UniqueName: \"kubernetes.io/projected/20bbfffc-6555-4439-9c40-d9ef4474098d-kube-api-access-tbg92\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.348122 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-oauth-serving-cert\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.348147 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20bbfffc-6555-4439-9c40-d9ef4474098d-console-oauth-config\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.348266 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20bbfffc-6555-4439-9c40-d9ef4474098d-console-serving-cert\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.348308 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-trusted-ca-bundle\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.348403 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-console-config\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.399425 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.451243 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbg92\" (UniqueName: \"kubernetes.io/projected/20bbfffc-6555-4439-9c40-d9ef4474098d-kube-api-access-tbg92\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.451309 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-oauth-serving-cert\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.451338 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20bbfffc-6555-4439-9c40-d9ef4474098d-console-oauth-config\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.451376 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20bbfffc-6555-4439-9c40-d9ef4474098d-console-serving-cert\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.451394 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-trusted-ca-bundle\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.451432 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-console-config\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.451463 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-service-ca\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.452465 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-service-ca\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.453655 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-oauth-serving-cert\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.455570 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-trusted-ca-bundle\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.456382 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20bbfffc-6555-4439-9c40-d9ef4474098d-console-config\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.460423 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20bbfffc-6555-4439-9c40-d9ef4474098d-console-serving-cert\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.460575 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20bbfffc-6555-4439-9c40-d9ef4474098d-console-oauth-config\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.479016 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbg92\" (UniqueName: \"kubernetes.io/projected/20bbfffc-6555-4439-9c40-d9ef4474098d-kube-api-access-tbg92\") pod \"console-74649cfd9b-4vflk\" (UID: \"20bbfffc-6555-4439-9c40-d9ef4474098d\") " pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.500054 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hrczn" event={"ID":"103dbc2d-351d-479d-91c4-59c5a650c8e5","Type":"ContainerStarted","Data":"90217670404b1b2c199ae5d0ec8999e5655e416b59a027296b6907a23b4290b2"}
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.575917 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"]
Feb 23 10:18:13 crc kubenswrapper[4904]: W0223 10:18:13.581474 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf927138f_bb89_4c3a_a543_d5256a3cba5c.slice/crio-78d1b09c1e6d9431039ec27777c2091051478d1ff7a929a4a4af92a2b9cbb20a WatchSource:0}: Error finding container 78d1b09c1e6d9431039ec27777c2091051478d1ff7a929a4a4af92a2b9cbb20a: Status 404 returned error can't find the container with id 78d1b09c1e6d9431039ec27777c2091051478d1ff7a929a4a4af92a2b9cbb20a
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.623599 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.655692 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz"]
Feb 23 10:18:13 crc kubenswrapper[4904]: W0223 10:18:13.658986 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7287f83_1d7a_4d2c_a330_36c3ab421222.slice/crio-9fe110e2571e4a9e58537b9d126dd1d1d24c2ba491d2d83bc09456eddf79a712 WatchSource:0}: Error finding container 9fe110e2571e4a9e58537b9d126dd1d1d24c2ba491d2d83bc09456eddf79a712: Status 404 returned error can't find the container with id 9fe110e2571e4a9e58537b9d126dd1d1d24c2ba491d2d83bc09456eddf79a712
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.769691 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm"]
Feb 23 10:18:13 crc kubenswrapper[4904]: I0223 10:18:13.995818 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74649cfd9b-4vflk"]
Feb 23 10:18:14 crc kubenswrapper[4904]: W0223 10:18:14.001640 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20bbfffc_6555_4439_9c40_d9ef4474098d.slice/crio-4d37db014237273a496ca2df29a18290fde47245c7f36f8617dd6b6b917fe7a6 WatchSource:0}: Error finding container 4d37db014237273a496ca2df29a18290fde47245c7f36f8617dd6b6b917fe7a6: Status 404 returned error can't find the container with id 4d37db014237273a496ca2df29a18290fde47245c7f36f8617dd6b6b917fe7a6
Feb 23 10:18:14 crc kubenswrapper[4904]: I0223 10:18:14.511368 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74649cfd9b-4vflk" event={"ID":"20bbfffc-6555-4439-9c40-d9ef4474098d","Type":"ContainerStarted","Data":"ce46948173c75fe4b984ab4cbfd42c3a80bc78f760a1b184ce99c2fb211e978d"}
Feb 23 10:18:14 crc kubenswrapper[4904]: I0223 10:18:14.511832 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74649cfd9b-4vflk" event={"ID":"20bbfffc-6555-4439-9c40-d9ef4474098d","Type":"ContainerStarted","Data":"4d37db014237273a496ca2df29a18290fde47245c7f36f8617dd6b6b917fe7a6"}
Feb 23 10:18:14 crc kubenswrapper[4904]: I0223 10:18:14.516086 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw" event={"ID":"f927138f-bb89-4c3a-a543-d5256a3cba5c","Type":"ContainerStarted","Data":"78d1b09c1e6d9431039ec27777c2091051478d1ff7a929a4a4af92a2b9cbb20a"}
Feb 23 10:18:14 crc kubenswrapper[4904]: I0223 10:18:14.517899 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm" event={"ID":"505d4d9b-90d8-40c4-ac88-7ebe41db94ed","Type":"ContainerStarted","Data":"4f55c8afd874ef1e996f8cc69b6b6d839459bdc05b92faaadd357df72ad91b46"}
Feb 23 10:18:14 crc kubenswrapper[4904]: I0223 10:18:14.519134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz" event={"ID":"b7287f83-1d7a-4d2c-a330-36c3ab421222","Type":"ContainerStarted","Data":"9fe110e2571e4a9e58537b9d126dd1d1d24c2ba491d2d83bc09456eddf79a712"}
Feb 23 10:18:14 crc kubenswrapper[4904]: I0223 10:18:14.537141 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74649cfd9b-4vflk" podStartSLOduration=1.537123384 podStartE2EDuration="1.537123384s" podCreationTimestamp="2026-02-23 10:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:18:14.532920064 +0000 UTC m=+727.953293577" watchObservedRunningTime="2026-02-23 10:18:14.537123384 +0000 UTC m=+727.957496897"
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.540617 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw" event={"ID":"f927138f-bb89-4c3a-a543-d5256a3cba5c","Type":"ContainerStarted","Data":"84110a4a655bd867ed5f381ba3ad7665d6d267ac9e08c4ed3f9db3d74fbcd6df"}
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.542953 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hrczn" event={"ID":"103dbc2d-351d-479d-91c4-59c5a650c8e5","Type":"ContainerStarted","Data":"342087d2639281f071b623ba3ccdbe920212047a283801fba332176cfeaeebe8"}
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.543343 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.543444 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.547464 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz" event={"ID":"b7287f83-1d7a-4d2c-a330-36c3ab421222","Type":"ContainerStarted","Data":"401b6808a8c26b830c126377c8d81b28b66bce68501e6a7fd778248daccf304e"}
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.549092 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm" event={"ID":"505d4d9b-90d8-40c4-ac88-7ebe41db94ed","Type":"ContainerStarted","Data":"53ed28d5af9717f1ed81f3b9f22beb972e6021adc67fff53c4bb2baae73087b1"}
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.564060 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw" podStartSLOduration=2.424945433 podStartE2EDuration="5.564042887s" podCreationTimestamp="2026-02-23 10:18:12 +0000 UTC" firstStartedPulling="2026-02-23 10:18:13.587263184 +0000 UTC m=+727.007636697" lastFinishedPulling="2026-02-23 10:18:16.726360648 +0000 UTC m=+730.146734151" observedRunningTime="2026-02-23 10:18:17.56061718 +0000 UTC m=+730.980990693" watchObservedRunningTime="2026-02-23 10:18:17.564042887 +0000 UTC m=+730.984416400"
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.583454 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hrczn" podStartSLOduration=2.198907798 podStartE2EDuration="5.583434337s" podCreationTimestamp="2026-02-23 10:18:12 +0000 UTC" firstStartedPulling="2026-02-23 10:18:13.361778254 +0000 UTC m=+726.782151757" lastFinishedPulling="2026-02-23 10:18:16.746304793 +0000 UTC m=+730.166678296" observedRunningTime="2026-02-23 10:18:17.575304296 +0000 UTC m=+730.995677819" watchObservedRunningTime="2026-02-23 10:18:17.583434337 +0000 UTC m=+731.003807860"
Feb 23 10:18:17 crc kubenswrapper[4904]: I0223 10:18:17.597009 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2d2cz" podStartSLOduration=1.579549884 podStartE2EDuration="4.596945389s" podCreationTimestamp="2026-02-23 10:18:13 +0000 UTC" firstStartedPulling="2026-02-23 10:18:13.709539789 +0000 UTC m=+727.129913302" lastFinishedPulling="2026-02-23 10:18:16.726935294 +0000 UTC m=+730.147308807" observedRunningTime="2026-02-23 10:18:17.594379917 +0000 UTC m=+731.014753430" watchObservedRunningTime="2026-02-23 10:18:17.596945389 +0000 UTC m=+731.017318902"
Feb 23 10:18:20 crc kubenswrapper[4904]: I0223 10:18:20.586638 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm" event={"ID":"505d4d9b-90d8-40c4-ac88-7ebe41db94ed","Type":"ContainerStarted","Data":"910acabc024895d1924b4c6a35202237b9a326b11f4e1c6c17777efc9fe025a4"}
Feb 23 10:18:20 crc kubenswrapper[4904]: I0223 10:18:20.618427 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m9gsm" podStartSLOduration=3.055716819 podStartE2EDuration="8.618397207s" podCreationTimestamp="2026-02-23 10:18:12 +0000 UTC" firstStartedPulling="2026-02-23 10:18:13.847120258 +0000 UTC m=+727.267493771" lastFinishedPulling="2026-02-23 10:18:19.409800646 +0000 UTC m=+732.830174159" observedRunningTime="2026-02-23 10:18:20.613638092 +0000 UTC m=+734.034011605" watchObservedRunningTime="2026-02-23 10:18:20.618397207 +0000 UTC m=+734.038770740"
Feb 23 10:18:23 crc kubenswrapper[4904]: I0223 10:18:23.305961 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hrczn"
Feb 23 10:18:23 crc kubenswrapper[4904]: I0223 10:18:23.624759 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:23 crc kubenswrapper[4904]: I0223 10:18:23.624820 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:23 crc kubenswrapper[4904]: I0223 10:18:23.631259 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:24 crc kubenswrapper[4904]: I0223 10:18:24.614607 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74649cfd9b-4vflk"
Feb 23 10:18:24 crc kubenswrapper[4904]: I0223 10:18:24.724241 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x6bcw"]
Feb 23 10:18:33 crc kubenswrapper[4904]: I0223 10:18:33.275957 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nw5sw"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.396210 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"]
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.398009 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.398082 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.398535 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.403311 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"]
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.403575 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.502537 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.502621 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr26v\" (UniqueName: \"kubernetes.io/projected/279e6193-ebdd-4c4e-8f35-03f3d04040b1-kube-api-access-jr26v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.502778 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.604160 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.604212 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr26v\" (UniqueName: \"kubernetes.io/projected/279e6193-ebdd-4c4e-8f35-03f3d04040b1-kube-api-access-jr26v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.604241 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.604680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.604703 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.632966 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr26v\" (UniqueName: \"kubernetes.io/projected/279e6193-ebdd-4c4e-8f35-03f3d04040b1-kube-api-access-jr26v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:47 crc kubenswrapper[4904]: I0223 10:18:47.717091 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"
Feb 23 10:18:48 crc kubenswrapper[4904]: I0223 10:18:48.171903 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m"]
Feb 23 10:18:48 crc kubenswrapper[4904]: I0223 10:18:48.764064 4904 generic.go:334] "Generic (PLEG): container finished" podID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerID="e86b8698421a81c2e9fa67d5d99a3badddd90dcc71725bf18d5e54446f2032f7" exitCode=0
Feb 23 10:18:48 crc kubenswrapper[4904]: I0223 10:18:48.764110 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m" event={"ID":"279e6193-ebdd-4c4e-8f35-03f3d04040b1","Type":"ContainerDied","Data":"e86b8698421a81c2e9fa67d5d99a3badddd90dcc71725bf18d5e54446f2032f7"}
Feb 23 10:18:48 crc kubenswrapper[4904]: I0223 10:18:48.764133 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m" event={"ID":"279e6193-ebdd-4c4e-8f35-03f3d04040b1","Type":"ContainerStarted","Data":"890cba09fff8c65f3225e1e154873475b3220bc9936d163092d8d9af978bbf39"}
Feb 23 10:18:49 crc kubenswrapper[4904]: I0223 10:18:49.790872 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x6bcw" podUID="abc78ff8-2055-4dbe-ae4e-67061adfe881" containerName="console" containerID="cri-o://1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef" gracePeriod=15
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.289115 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x6bcw_abc78ff8-2055-4dbe-ae4e-67061adfe881/console/0.log"
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.289430 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x6bcw"
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.440095 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6h2n\" (UniqueName: \"kubernetes.io/projected/abc78ff8-2055-4dbe-ae4e-67061adfe881-kube-api-access-q6h2n\") pod \"abc78ff8-2055-4dbe-ae4e-67061adfe881\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") "
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.440155 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-config\") pod \"abc78ff8-2055-4dbe-ae4e-67061adfe881\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") "
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.440187 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-oauth-serving-cert\") pod \"abc78ff8-2055-4dbe-ae4e-67061adfe881\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") "
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.440229 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-serving-cert\") pod \"abc78ff8-2055-4dbe-ae4e-67061adfe881\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") "
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.440247 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-service-ca\") pod \"abc78ff8-2055-4dbe-ae4e-67061adfe881\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") "
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.440286 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-trusted-ca-bundle\") pod \"abc78ff8-2055-4dbe-ae4e-67061adfe881\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") "
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.440315 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-oauth-config\") pod \"abc78ff8-2055-4dbe-ae4e-67061adfe881\" (UID: \"abc78ff8-2055-4dbe-ae4e-67061adfe881\") "
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.440837 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "abc78ff8-2055-4dbe-ae4e-67061adfe881" (UID: "abc78ff8-2055-4dbe-ae4e-67061adfe881"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.441781 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-config" (OuterVolumeSpecName: "console-config") pod "abc78ff8-2055-4dbe-ae4e-67061adfe881" (UID: "abc78ff8-2055-4dbe-ae4e-67061adfe881"). InnerVolumeSpecName "console-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.443365 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-service-ca" (OuterVolumeSpecName: "service-ca") pod "abc78ff8-2055-4dbe-ae4e-67061adfe881" (UID: "abc78ff8-2055-4dbe-ae4e-67061adfe881"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.443411 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "abc78ff8-2055-4dbe-ae4e-67061adfe881" (UID: "abc78ff8-2055-4dbe-ae4e-67061adfe881"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.446274 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "abc78ff8-2055-4dbe-ae4e-67061adfe881" (UID: "abc78ff8-2055-4dbe-ae4e-67061adfe881"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.446471 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc78ff8-2055-4dbe-ae4e-67061adfe881-kube-api-access-q6h2n" (OuterVolumeSpecName: "kube-api-access-q6h2n") pod "abc78ff8-2055-4dbe-ae4e-67061adfe881" (UID: "abc78ff8-2055-4dbe-ae4e-67061adfe881"). InnerVolumeSpecName "kube-api-access-q6h2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.446787 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "abc78ff8-2055-4dbe-ae4e-67061adfe881" (UID: "abc78ff8-2055-4dbe-ae4e-67061adfe881"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.541678 4904 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.542250 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6h2n\" (UniqueName: \"kubernetes.io/projected/abc78ff8-2055-4dbe-ae4e-67061adfe881-kube-api-access-q6h2n\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.542360 4904 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.542415 4904 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.542466 4904 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abc78ff8-2055-4dbe-ae4e-67061adfe881-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.542514 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.542571 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc78ff8-2055-4dbe-ae4e-67061adfe881-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.780443 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x6bcw_abc78ff8-2055-4dbe-ae4e-67061adfe881/console/0.log" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.780496 4904 generic.go:334] "Generic (PLEG): container finished" podID="abc78ff8-2055-4dbe-ae4e-67061adfe881" containerID="1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef" exitCode=2 Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.780581 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x6bcw" event={"ID":"abc78ff8-2055-4dbe-ae4e-67061adfe881","Type":"ContainerDied","Data":"1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef"} Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.780620 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x6bcw" event={"ID":"abc78ff8-2055-4dbe-ae4e-67061adfe881","Type":"ContainerDied","Data":"a5f2de12b07db53a1de4d96558e2e8997575c52b3fd098aec47860a299d43173"} Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.780638 4904 scope.go:117] "RemoveContainer" containerID="1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.781135 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x6bcw" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.785996 4904 generic.go:334] "Generic (PLEG): container finished" podID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerID="c65bb799bed8b9e87d2585b286cf5505441f49492bec69829f31c3469b2dd7e0" exitCode=0 Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.786032 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m" event={"ID":"279e6193-ebdd-4c4e-8f35-03f3d04040b1","Type":"ContainerDied","Data":"c65bb799bed8b9e87d2585b286cf5505441f49492bec69829f31c3469b2dd7e0"} Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.798975 4904 scope.go:117] "RemoveContainer" containerID="1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef" Feb 23 10:18:50 crc kubenswrapper[4904]: E0223 10:18:50.799332 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef\": container with ID starting with 1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef not found: ID does not exist" containerID="1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.799374 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef"} err="failed to get container status \"1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef\": rpc error: code = NotFound desc = could not find container \"1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef\": container with ID starting with 1886f600045dfa4ec05e0e351c9ffa0f87809c72ae3a28d3b68307b0fc7671ef not found: ID does not exist" Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.831961 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x6bcw"] Feb 23 10:18:50 crc kubenswrapper[4904]: I0223 10:18:50.840711 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x6bcw"] Feb 23 10:18:51 crc kubenswrapper[4904]: I0223 10:18:51.263986 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc78ff8-2055-4dbe-ae4e-67061adfe881" path="/var/lib/kubelet/pods/abc78ff8-2055-4dbe-ae4e-67061adfe881/volumes" Feb 23 10:18:51 crc kubenswrapper[4904]: I0223 10:18:51.795551 4904 generic.go:334] "Generic (PLEG): container finished" podID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerID="13ed3b18a7227c8f490ad435ed75fe1d51c8ee3aa9cde0fe657de7996a11d15d" exitCode=0 Feb 23 10:18:51 crc kubenswrapper[4904]: I0223 10:18:51.795618 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m" event={"ID":"279e6193-ebdd-4c4e-8f35-03f3d04040b1","Type":"ContainerDied","Data":"13ed3b18a7227c8f490ad435ed75fe1d51c8ee3aa9cde0fe657de7996a11d15d"} Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.063311 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m" Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.173092 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-bundle\") pod \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.173214 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr26v\" (UniqueName: \"kubernetes.io/projected/279e6193-ebdd-4c4e-8f35-03f3d04040b1-kube-api-access-jr26v\") pod \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.173323 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-util\") pod \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\" (UID: \"279e6193-ebdd-4c4e-8f35-03f3d04040b1\") " Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.174101 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-bundle" (OuterVolumeSpecName: "bundle") pod "279e6193-ebdd-4c4e-8f35-03f3d04040b1" (UID: "279e6193-ebdd-4c4e-8f35-03f3d04040b1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.180775 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279e6193-ebdd-4c4e-8f35-03f3d04040b1-kube-api-access-jr26v" (OuterVolumeSpecName: "kube-api-access-jr26v") pod "279e6193-ebdd-4c4e-8f35-03f3d04040b1" (UID: "279e6193-ebdd-4c4e-8f35-03f3d04040b1"). InnerVolumeSpecName "kube-api-access-jr26v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.187386 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-util" (OuterVolumeSpecName: "util") pod "279e6193-ebdd-4c4e-8f35-03f3d04040b1" (UID: "279e6193-ebdd-4c4e-8f35-03f3d04040b1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.274106 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-util\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.274179 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279e6193-ebdd-4c4e-8f35-03f3d04040b1-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.274200 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr26v\" (UniqueName: \"kubernetes.io/projected/279e6193-ebdd-4c4e-8f35-03f3d04040b1-kube-api-access-jr26v\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.821087 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m" event={"ID":"279e6193-ebdd-4c4e-8f35-03f3d04040b1","Type":"ContainerDied","Data":"890cba09fff8c65f3225e1e154873475b3220bc9936d163092d8d9af978bbf39"} Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.821220 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="890cba09fff8c65f3225e1e154873475b3220bc9936d163092d8d9af978bbf39" Feb 23 10:18:53 crc kubenswrapper[4904]: I0223 10:18:53.821343 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.743700 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq"] Feb 23 10:19:02 crc kubenswrapper[4904]: E0223 10:19:02.744554 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerName="extract" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.744572 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerName="extract" Feb 23 10:19:02 crc kubenswrapper[4904]: E0223 10:19:02.744586 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerName="util" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.744594 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerName="util" Feb 23 10:19:02 crc kubenswrapper[4904]: E0223 10:19:02.744606 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerName="pull" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.744614 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerName="pull" Feb 23 10:19:02 crc kubenswrapper[4904]: E0223 10:19:02.744637 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc78ff8-2055-4dbe-ae4e-67061adfe881" containerName="console" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.744645 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc78ff8-2055-4dbe-ae4e-67061adfe881" containerName="console" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.744792 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="279e6193-ebdd-4c4e-8f35-03f3d04040b1" containerName="extract" Feb 
23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.744809 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc78ff8-2055-4dbe-ae4e-67061adfe881" containerName="console" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.745272 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.747174 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.747385 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.747420 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.747494 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-w6zdc" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.748163 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.762414 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq"] Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.898878 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-apiservice-cert\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.898960 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmvs\" (UniqueName: \"kubernetes.io/projected/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-kube-api-access-bjmvs\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.898983 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-webhook-cert\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.997526 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn"] Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.998564 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.999602 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjmvs\" (UniqueName: \"kubernetes.io/projected/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-kube-api-access-bjmvs\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.999647 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-webhook-cert\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:02 crc kubenswrapper[4904]: I0223 10:19:02.999727 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-apiservice-cert\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.005898 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.006091 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jnvdw" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.006159 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.008419 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-webhook-cert\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.008477 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-apiservice-cert\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.025233 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjmvs\" (UniqueName: \"kubernetes.io/projected/d0ffe580-b2b6-41cd-ad6b-683ead0174c5-kube-api-access-bjmvs\") pod \"metallb-operator-controller-manager-67c5b68f57-j5bsq\" (UID: \"d0ffe580-b2b6-41cd-ad6b-683ead0174c5\") " pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.025793 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn"] Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.061561 4904 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.101364 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/326de05e-dfa0-46e8-be69-d0cd954deb8a-apiservice-cert\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: \"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.101436 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmwf\" (UniqueName: \"kubernetes.io/projected/326de05e-dfa0-46e8-be69-d0cd954deb8a-kube-api-access-zrmwf\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: \"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.101471 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/326de05e-dfa0-46e8-be69-d0cd954deb8a-webhook-cert\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: \"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.210157 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/326de05e-dfa0-46e8-be69-d0cd954deb8a-apiservice-cert\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: \"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.210292 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrmwf\" (UniqueName: \"kubernetes.io/projected/326de05e-dfa0-46e8-be69-d0cd954deb8a-kube-api-access-zrmwf\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: \"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.210329 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/326de05e-dfa0-46e8-be69-d0cd954deb8a-webhook-cert\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: \"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.237803 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrmwf\" (UniqueName: \"kubernetes.io/projected/326de05e-dfa0-46e8-be69-d0cd954deb8a-kube-api-access-zrmwf\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: \"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.238941 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/326de05e-dfa0-46e8-be69-d0cd954deb8a-apiservice-cert\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: 
\"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.241034 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/326de05e-dfa0-46e8-be69-d0cd954deb8a-webhook-cert\") pod \"metallb-operator-webhook-server-755d685c4c-jlswn\" (UID: \"326de05e-dfa0-46e8-be69-d0cd954deb8a\") " pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.457690 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.534850 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq"] Feb 23 10:19:03 crc kubenswrapper[4904]: W0223 10:19:03.562946 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0ffe580_b2b6_41cd_ad6b_683ead0174c5.slice/crio-5e69012e9d708fc4063e32ff0a32e605218c9446a916d1f02b48d82975aeb080 WatchSource:0}: Error finding container 5e69012e9d708fc4063e32ff0a32e605218c9446a916d1f02b48d82975aeb080: Status 404 returned error can't find the container with id 5e69012e9d708fc4063e32ff0a32e605218c9446a916d1f02b48d82975aeb080 Feb 23 10:19:03 crc kubenswrapper[4904]: I0223 10:19:03.749960 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn"] Feb 23 10:19:03 crc kubenswrapper[4904]: W0223 10:19:03.760877 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod326de05e_dfa0_46e8_be69_d0cd954deb8a.slice/crio-000a23269244b17a8bcc9518b1216d0a90ca7a41fd70a04c8f361478f9c4cab8 WatchSource:0}: Error finding container 000a23269244b17a8bcc9518b1216d0a90ca7a41fd70a04c8f361478f9c4cab8: Status 404 returned error can't find the container with id 000a23269244b17a8bcc9518b1216d0a90ca7a41fd70a04c8f361478f9c4cab8 Feb 23 10:19:04 crc kubenswrapper[4904]: I0223 10:19:04.215554 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" event={"ID":"d0ffe580-b2b6-41cd-ad6b-683ead0174c5","Type":"ContainerStarted","Data":"5e69012e9d708fc4063e32ff0a32e605218c9446a916d1f02b48d82975aeb080"} Feb 23 10:19:04 crc kubenswrapper[4904]: I0223 10:19:04.216526 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" event={"ID":"326de05e-dfa0-46e8-be69-d0cd954deb8a","Type":"ContainerStarted","Data":"000a23269244b17a8bcc9518b1216d0a90ca7a41fd70a04c8f361478f9c4cab8"} Feb 23 10:19:10 crc kubenswrapper[4904]: I0223 10:19:10.268321 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" event={"ID":"326de05e-dfa0-46e8-be69-d0cd954deb8a","Type":"ContainerStarted","Data":"e289dfbea8d4f680a32f6493e3aa058f99be69cbf37fd79ddfa0b3be9dd1f968"} Feb 23 10:19:10 crc kubenswrapper[4904]: I0223 10:19:10.269021 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:11 crc kubenswrapper[4904]: I0223 10:19:11.276373 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" event={"ID":"d0ffe580-b2b6-41cd-ad6b-683ead0174c5","Type":"ContainerStarted","Data":"1508f522986338d75c74b407b9c05ebdf2d8110bcf447de75efcd1d6cc9f28ad"} Feb 23 10:19:11 crc kubenswrapper[4904]: I0223 10:19:11.314064 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" podStartSLOduration=3.928048232 podStartE2EDuration="9.314023682s" podCreationTimestamp="2026-02-23 10:19:02 +0000 UTC" firstStartedPulling="2026-02-23 10:19:03.764789035 +0000 UTC m=+777.185162548" lastFinishedPulling="2026-02-23 10:19:09.150764485 +0000 UTC m=+782.571137998" observedRunningTime="2026-02-23 10:19:10.297527425 +0000 UTC m=+783.717900938" watchObservedRunningTime="2026-02-23 10:19:11.314023682 +0000 UTC m=+784.734397195" Feb 23 10:19:11 crc kubenswrapper[4904]: I0223 10:19:11.320393 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" podStartSLOduration=2.005861666 podStartE2EDuration="9.320381012s" podCreationTimestamp="2026-02-23 10:19:02 +0000 UTC" firstStartedPulling="2026-02-23 10:19:03.585879354 +0000 UTC m=+777.006252867" lastFinishedPulling="2026-02-23 10:19:10.9003987 +0000 UTC m=+784.320772213" observedRunningTime="2026-02-23 10:19:11.310311307 +0000 UTC m=+784.730684830" watchObservedRunningTime="2026-02-23 10:19:11.320381012 +0000 UTC m=+784.740754525" Feb 23 10:19:12 crc kubenswrapper[4904]: I0223 10:19:12.167682 4904 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 10:19:12 crc kubenswrapper[4904]: I0223 10:19:12.286957 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:17 crc kubenswrapper[4904]: I0223 10:19:17.398619 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:19:17 crc kubenswrapper[4904]: I0223 10:19:17.399684 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:19:23 crc kubenswrapper[4904]: I0223 10:19:23.464272 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-755d685c4c-jlswn" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.065278 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67c5b68f57-j5bsq" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.831839 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t48sj"] Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.834103 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.839224 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.839425 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.839570 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5cbvd" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.859156 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh"] Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.859936 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.862673 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.871766 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh"] Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.916167 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-97svr"] Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.936928 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-mrscb"] Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.938175 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-97svr" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.938503 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.954847 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.954935 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-fhqxb" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.955077 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.955102 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.955212 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 23 10:19:43 crc kubenswrapper[4904]: I0223 10:19:43.963319 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-mrscb"] Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.002784 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b985\" (UniqueName: \"kubernetes.io/projected/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-kube-api-access-2b985\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.002846 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-reloader\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.002873 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-sockets\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.002943 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6pm7\" (UniqueName: \"kubernetes.io/projected/a84cd4c2-05fc-43f7-8dec-16587923b06f-kube-api-access-q6pm7\") pod \"frr-k8s-webhook-server-78b44bf5bb-54msh\" (UID: \"a84cd4c2-05fc-43f7-8dec-16587923b06f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.003001 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-metrics-certs\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.003023 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-conf\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.003057 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-startup\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.003083 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a84cd4c2-05fc-43f7-8dec-16587923b06f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-54msh\" (UID: \"a84cd4c2-05fc-43f7-8dec-16587923b06f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.003125 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-metrics\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104352 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-cert\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104403 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b985\" (UniqueName: \"kubernetes.io/projected/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-kube-api-access-2b985\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104429 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-reloader\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104451 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsx64\" (UniqueName: \"kubernetes.io/projected/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-kube-api-access-dsx64\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104469 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-sockets\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-metrics-certs\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104518 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-metrics-certs\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104535 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjz28\" (UniqueName: \"kubernetes.io/projected/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-kube-api-access-bjz28\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104556 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6pm7\" (UniqueName: \"kubernetes.io/projected/a84cd4c2-05fc-43f7-8dec-16587923b06f-kube-api-access-q6pm7\") pod \"frr-k8s-webhook-server-78b44bf5bb-54msh\" (UID: \"a84cd4c2-05fc-43f7-8dec-16587923b06f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104581 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104606 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-metrics-certs\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104621 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-metallb-excludel2\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104636 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-conf\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104665 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-startup\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104681 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a84cd4c2-05fc-43f7-8dec-16587923b06f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-54msh\" (UID: \"a84cd4c2-05fc-43f7-8dec-16587923b06f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104698 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-metrics\") pod \"frr-k8s-t48sj\" (UID: 
\"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104915 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-reloader\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.104931 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-sockets\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.105943 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-metrics\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.106113 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-conf\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.106641 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-frr-startup\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.126433 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-metrics-certs\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.126467 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a84cd4c2-05fc-43f7-8dec-16587923b06f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-54msh\" (UID: \"a84cd4c2-05fc-43f7-8dec-16587923b06f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.131554 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6pm7\" (UniqueName: \"kubernetes.io/projected/a84cd4c2-05fc-43f7-8dec-16587923b06f-kube-api-access-q6pm7\") pod \"frr-k8s-webhook-server-78b44bf5bb-54msh\" (UID: \"a84cd4c2-05fc-43f7-8dec-16587923b06f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.132355 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b985\" (UniqueName: \"kubernetes.io/projected/411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2-kube-api-access-2b985\") pod \"frr-k8s-t48sj\" (UID: \"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2\") " pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.150024 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.175246 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.206848 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-cert\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.206937 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsx64\" (UniqueName: \"kubernetes.io/projected/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-kube-api-access-dsx64\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.206990 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-metrics-certs\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.207023 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-metrics-certs\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.207047 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjz28\" (UniqueName: \"kubernetes.io/projected/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-kube-api-access-bjz28\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.207100 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.207150 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-metallb-excludel2\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: E0223 10:19:44.207954 4904 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 10:19:44 crc kubenswrapper[4904]: E0223 10:19:44.208043 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist podName:fcd8dd0f-8522-4ce8-873b-a7821cba8bd7 nodeName:}" failed. No retries permitted until 2026-02-23 10:19:44.708020194 +0000 UTC m=+818.128393707 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist") pod "speaker-97svr" (UID: "fcd8dd0f-8522-4ce8-873b-a7821cba8bd7") : secret "metallb-memberlist" not found Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.208482 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-metallb-excludel2\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.213442 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-metrics-certs\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.214231 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-metrics-certs\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.214622 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.222017 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-cert\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.224866 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjz28\" (UniqueName: \"kubernetes.io/projected/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-kube-api-access-bjz28\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.236051 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsx64\" (UniqueName: \"kubernetes.io/projected/a4dbb64a-0dcd-47cd-bc72-8c9acb096464-kube-api-access-dsx64\") pod \"controller-69bbfbf88f-mrscb\" (UID: \"a4dbb64a-0dcd-47cd-bc72-8c9acb096464\") " pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.276434 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-mrscb" Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.484582 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerStarted","Data":"7939782bb30a88b89335341fce7119e22f01b26e4588754cf2382af76aa73bbd"} Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.557348 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-mrscb"] Feb 23 10:19:44 crc kubenswrapper[4904]: W0223 10:19:44.560892 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4dbb64a_0dcd_47cd_bc72_8c9acb096464.slice/crio-6111a37f96016516efb27cfd22a898a053106310dca3d881a8e8d8b5df1a9fee WatchSource:0}: Error finding container 6111a37f96016516efb27cfd22a898a053106310dca3d881a8e8d8b5df1a9fee: Status 404 returned error can't find the container with id 6111a37f96016516efb27cfd22a898a053106310dca3d881a8e8d8b5df1a9fee Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.669798 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh"] Feb 23 10:19:44 crc kubenswrapper[4904]: I0223 10:19:44.714607 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr" Feb 23 10:19:44 crc kubenswrapper[4904]: E0223 10:19:44.714790 4904 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 10:19:44 crc kubenswrapper[4904]: E0223 10:19:44.714849 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist podName:fcd8dd0f-8522-4ce8-873b-a7821cba8bd7 nodeName:}" failed. No retries permitted until 2026-02-23 10:19:45.714834347 +0000 UTC m=+819.135207860 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist") pod "speaker-97svr" (UID: "fcd8dd0f-8522-4ce8-873b-a7821cba8bd7") : secret "metallb-memberlist" not found
Feb 23 10:19:45 crc kubenswrapper[4904]: I0223 10:19:45.494086 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mrscb" event={"ID":"a4dbb64a-0dcd-47cd-bc72-8c9acb096464","Type":"ContainerStarted","Data":"006526c4e86e353c412883753ea733e8994a1364eec86d9604bc351507ee29fc"}
Feb 23 10:19:45 crc kubenswrapper[4904]: I0223 10:19:45.495247 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mrscb" event={"ID":"a4dbb64a-0dcd-47cd-bc72-8c9acb096464","Type":"ContainerStarted","Data":"a8e912ca625f62240c1cdbfb0fdc28ee46c659791659b38c471e73142d7613e0"}
Feb 23 10:19:45 crc kubenswrapper[4904]: I0223 10:19:45.495314 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mrscb" event={"ID":"a4dbb64a-0dcd-47cd-bc72-8c9acb096464","Type":"ContainerStarted","Data":"6111a37f96016516efb27cfd22a898a053106310dca3d881a8e8d8b5df1a9fee"}
Feb 23 10:19:45 crc kubenswrapper[4904]: I0223 10:19:45.495376 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-mrscb"
Feb 23 10:19:45 crc kubenswrapper[4904]: I0223 10:19:45.495885 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" event={"ID":"a84cd4c2-05fc-43f7-8dec-16587923b06f","Type":"ContainerStarted","Data":"062847d97bf560ec9ddc570727ed56f78e4690ccd1269d789022097c487f0749"}
Feb 23 10:19:45 crc kubenswrapper[4904]: I0223 10:19:45.733245 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr"
Feb 23 10:19:45 crc kubenswrapper[4904]: I0223 10:19:45.739258 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fcd8dd0f-8522-4ce8-873b-a7821cba8bd7-memberlist\") pod \"speaker-97svr\" (UID: \"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7\") " pod="metallb-system/speaker-97svr"
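The two failed mounts above show the kubelet's per-volume retry backoff: the first failure is retried after 500ms, the second after 1s (the durationBeforeRetry values), and the mount finally succeeds at 10:19:45.739 once the metallb-memberlist Secret exists. A minimal sketch of that doubling schedule, assuming the kubelet's usual 500ms initial delay; the 2m2s ceiling is an assumption about its defaults and is not visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff returns the wait before the next retry: it starts at
// 500ms (the "durationBeforeRetry 500ms" above) and doubles after
// every failure, up to an assumed ceiling. This log only shows the
// 500ms and 1s steps of the schedule.
func nextBackoff(prev time.Duration) time.Duration {
	const (
		initial = 500 * time.Millisecond
		ceiling = 2*time.Minute + 2*time.Second // assumption, not from this log
	)
	if prev == 0 {
		return initial
	}
	if next := 2 * prev; next < ceiling {
		return next
	}
	return ceiling
}

func main() {
	// Replay the schedule seen for the "memberlist" volume.
	var d time.Duration
	for i := 1; i <= 4; i++ {
		d = nextBackoff(d)
		fmt.Printf("after failure %d: retry in %v\n", i, d) // 500ms, 1s, 2s, 4s
	}
}
```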
Need to start a new one" pod="metallb-system/speaker-97svr" Feb 23 10:19:46 crc kubenswrapper[4904]: I0223 10:19:46.513276 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-97svr" event={"ID":"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7","Type":"ContainerStarted","Data":"dd3932395287711995acf9b6950a6075cd7d3abe2f7bfc6accd66d29570177c4"} Feb 23 10:19:46 crc kubenswrapper[4904]: I0223 10:19:46.513324 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-97svr" event={"ID":"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7","Type":"ContainerStarted","Data":"dd107aad81477728cd6c2ee8e92e28fe49a8f73595a16f6a66be8f37a3777aff"} Feb 23 10:19:47 crc kubenswrapper[4904]: I0223 10:19:47.288952 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-mrscb" podStartSLOduration=4.288932997 podStartE2EDuration="4.288932997s" podCreationTimestamp="2026-02-23 10:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:19:45.517730391 +0000 UTC m=+818.938103914" watchObservedRunningTime="2026-02-23 10:19:47.288932997 +0000 UTC m=+820.709306510" Feb 23 10:19:47 crc kubenswrapper[4904]: I0223 10:19:47.398299 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:19:47 crc kubenswrapper[4904]: I0223 10:19:47.398379 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:19:47 crc kubenswrapper[4904]: I0223 10:19:47.398440 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:19:47 crc kubenswrapper[4904]: I0223 10:19:47.399328 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bd1bf756dbf679d7fd1d8585fe9574e3cbefdb7a18e5dc940344b15e289c6d4"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:19:47 crc kubenswrapper[4904]: I0223 10:19:47.399410 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://5bd1bf756dbf679d7fd1d8585fe9574e3cbefdb7a18e5dc940344b15e289c6d4" gracePeriod=600 Feb 23 10:19:47 crc kubenswrapper[4904]: I0223 10:19:47.530755 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-97svr" event={"ID":"fcd8dd0f-8522-4ce8-873b-a7821cba8bd7","Type":"ContainerStarted","Data":"21d33691bc185272455f4bd98c3584558953fc34105635ed3abb3dbafc2b30b5"} Feb 23 10:19:47 crc kubenswrapper[4904]: I0223 10:19:47.531233 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-97svr" Feb 23 10:19:48 crc kubenswrapper[4904]: I0223 
Feb 23 10:19:48 crc kubenswrapper[4904]: I0223 10:19:48.542534 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"5bd1bf756dbf679d7fd1d8585fe9574e3cbefdb7a18e5dc940344b15e289c6d4"}
Feb 23 10:19:48 crc kubenswrapper[4904]: I0223 10:19:48.543270 4904 scope.go:117] "RemoveContainer" containerID="5c2bc8b3e78a2b6bca2525dc93766b010390fb3eb8142d793d1bb25245ce12c0"
Feb 23 10:19:48 crc kubenswrapper[4904]: I0223 10:19:48.542468 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="5bd1bf756dbf679d7fd1d8585fe9574e3cbefdb7a18e5dc940344b15e289c6d4" exitCode=0
Feb 23 10:19:48 crc kubenswrapper[4904]: I0223 10:19:48.544902 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"65119bff18e2a117e57ca57b960a2723a6ad4c2ce44063bd803ebeebee5b384d"}
Feb 23 10:19:48 crc kubenswrapper[4904]: I0223 10:19:48.561563 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-97svr" podStartSLOduration=5.561520733 podStartE2EDuration="5.561520733s" podCreationTimestamp="2026-02-23 10:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:19:47.565264279 +0000 UTC m=+820.985637792" watchObservedRunningTime="2026-02-23 10:19:48.561520733 +0000 UTC m=+821.981894256"
Feb 23 10:19:53 crc kubenswrapper[4904]: I0223 10:19:53.577493 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" event={"ID":"a84cd4c2-05fc-43f7-8dec-16587923b06f","Type":"ContainerStarted","Data":"e88303f6a59761ab923c7829d05b955d4c726ac793f0f46ce170e2b1440ac016"}
Feb 23 10:19:53 crc kubenswrapper[4904]: I0223 10:19:53.578485 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh"
Feb 23 10:19:53 crc kubenswrapper[4904]: I0223 10:19:53.580004 4904 generic.go:334] "Generic (PLEG): container finished" podID="411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2" containerID="1fbb540010ded82c5522b2ebefe72b1401b455317b7ac586884c68ae21933c96" exitCode=0
Feb 23 10:19:53 crc kubenswrapper[4904]: I0223 10:19:53.580039 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerDied","Data":"1fbb540010ded82c5522b2ebefe72b1401b455317b7ac586884c68ae21933c96"}
Feb 23 10:19:53 crc kubenswrapper[4904]: I0223 10:19:53.605299 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" podStartSLOduration=2.76540256 podStartE2EDuration="10.605283064s" podCreationTimestamp="2026-02-23 10:19:43 +0000 UTC" firstStartedPulling="2026-02-23 10:19:44.670152361 +0000 UTC m=+818.090525874" lastFinishedPulling="2026-02-23 10:19:52.510032865 +0000 UTC m=+825.930406378" observedRunningTime="2026-02-23 10:19:53.602831595 +0000 UTC m=+827.023205108" watchObservedRunningTime="2026-02-23 10:19:53.605283064 +0000 UTC m=+827.025656577"
Feb 23 10:19:54 crc kubenswrapper[4904]: I0223 10:19:54.280656 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-mrscb"
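The startup-latency line for frr-k8s-webhook-server above makes the relationship between the tracker's two durations visible: podStartSLOduration is podStartE2EDuration minus the image-pull window (firstStartedPulling to lastFinishedPulling). Checking the arithmetic with the monotonic m=+ offsets printed in that record:

```go
package main

import "fmt"

// Verify podStartSLOduration for frr-k8s-webhook-server-78b44bf5bb-54msh
// from the m=+ monotonic offsets in the tracker line above.
func main() {
	const (
		e2e                 = 10.605283064  // podStartE2EDuration, seconds
		firstStartedPulling = 818.090525874 // m=+ offset, seconds
		lastFinishedPulling = 825.930406378 // m=+ offset, seconds
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	slo := e2e - pullWindow
	fmt.Printf("pull window: %.9fs\n", pullWindow) // 7.839880504s
	fmt.Printf("SLO duration: %.9fs\n", slo)       // 2.765402560s, matching podStartSLOduration=2.76540256 up to rounding
}
```

The same subtraction explains the other tracker lines: pods whose pull timestamps are the zero value "0001-01-01 00:00:00" pulled nothing, so their SLO and E2E durations coincide.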
Feb 23 10:19:54 crc kubenswrapper[4904]: I0223 10:19:54.589265 4904 generic.go:334] "Generic (PLEG): container finished" podID="411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2" containerID="374f9469b20c83ebf6e0a892e70e355f077c79046b002b2c542a7ee86d6c3bd2" exitCode=0
Feb 23 10:19:54 crc kubenswrapper[4904]: I0223 10:19:54.590373 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerDied","Data":"374f9469b20c83ebf6e0a892e70e355f077c79046b002b2c542a7ee86d6c3bd2"}
Feb 23 10:19:55 crc kubenswrapper[4904]: I0223 10:19:55.599253 4904 generic.go:334] "Generic (PLEG): container finished" podID="411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2" containerID="d4ca8298f06149962b0012a3e9daa3fbfb7c209e2ae8527156768bb83b6424a9" exitCode=0
Feb 23 10:19:55 crc kubenswrapper[4904]: I0223 10:19:55.599301 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerDied","Data":"d4ca8298f06149962b0012a3e9daa3fbfb7c209e2ae8527156768bb83b6424a9"}
Feb 23 10:19:56 crc kubenswrapper[4904]: I0223 10:19:56.608804 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerStarted","Data":"24c744137d3d8b588221e6a0422a0a3bc2d2cbae6f3c8f02029893a91bb5778c"}
Feb 23 10:19:56 crc kubenswrapper[4904]: I0223 10:19:56.609752 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerStarted","Data":"eaef39d377c53a6b3c93c064bce468261226094eb7c3d2ff3e8912b4c17a9e59"}
Feb 23 10:19:56 crc kubenswrapper[4904]: I0223 10:19:56.609832 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerStarted","Data":"9e0ed4ffe12b437e69801df87d46208a2f6eaa976984d447fd274a7054bd18c2"}
Feb 23 10:19:56 crc kubenswrapper[4904]: I0223 10:19:56.609896 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerStarted","Data":"3c941403192fb94060be955c1c9e5f2f76cde58499a160dfde922f7b722e9837"}
Feb 23 10:19:56 crc kubenswrapper[4904]: I0223 10:19:56.609950 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerStarted","Data":"d0c88ff75e67cf7db2308a200492aef1a43edf1e14c431f194e013a276940b0c"}
Feb 23 10:19:57 crc kubenswrapper[4904]: I0223 10:19:57.632026 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t48sj" event={"ID":"411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2","Type":"ContainerStarted","Data":"576c0b32c5cee550534ac4c7be65f98a38a0410794badeef1ea4f5df691946e2"}
Feb 23 10:19:57 crc kubenswrapper[4904]: I0223 10:19:57.632711 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t48sj"
Feb 23 10:19:57 crc kubenswrapper[4904]: I0223 10:19:57.692681 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t48sj" podStartSLOduration=6.540710413 podStartE2EDuration="14.692645481s" podCreationTimestamp="2026-02-23 10:19:43 +0000 UTC" firstStartedPulling="2026-02-23 10:19:44.377195848 +0000 UTC m=+817.797569361" lastFinishedPulling="2026-02-23 10:19:52.529130916 +0000 UTC m=+825.949504429"
observedRunningTime="2026-02-23 10:19:57.686473467 +0000 UTC m=+831.106846980" watchObservedRunningTime="2026-02-23 10:19:57.692645481 +0000 UTC m=+831.113019004" Feb 23 10:19:59 crc kubenswrapper[4904]: I0223 10:19:59.151377 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t48sj" Feb 23 10:19:59 crc kubenswrapper[4904]: I0223 10:19:59.193465 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t48sj" Feb 23 10:20:04 crc kubenswrapper[4904]: I0223 10:20:04.192662 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-54msh" Feb 23 10:20:05 crc kubenswrapper[4904]: I0223 10:20:05.771055 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-97svr" Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.556637 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7flqs"] Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.557827 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7flqs" Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.560306 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-n4xx5" Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.561261 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.563943 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.578420 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7flqs"] Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.679933 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klqw8\" (UniqueName: \"kubernetes.io/projected/9b3e5c52-6bf4-4784-8c6b-3733c17a0a06-kube-api-access-klqw8\") pod \"openstack-operator-index-7flqs\" (UID: \"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06\") " pod="openstack-operators/openstack-operator-index-7flqs" Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.781369 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klqw8\" (UniqueName: \"kubernetes.io/projected/9b3e5c52-6bf4-4784-8c6b-3733c17a0a06-kube-api-access-klqw8\") pod \"openstack-operator-index-7flqs\" (UID: \"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06\") " pod="openstack-operators/openstack-operator-index-7flqs" Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.804305 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klqw8\" (UniqueName: \"kubernetes.io/projected/9b3e5c52-6bf4-4784-8c6b-3733c17a0a06-kube-api-access-klqw8\") pod \"openstack-operator-index-7flqs\" (UID: \"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06\") " pod="openstack-operators/openstack-operator-index-7flqs" Feb 23 10:20:08 crc kubenswrapper[4904]: I0223 10:20:08.891017 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7flqs" Feb 23 10:20:09 crc kubenswrapper[4904]: I0223 10:20:09.325355 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7flqs"] Feb 23 10:20:09 crc kubenswrapper[4904]: I0223 10:20:09.737282 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7flqs" event={"ID":"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06","Type":"ContainerStarted","Data":"355c0d974a364977f0452bef4dbf403e405a44bc737da02375e49f0aefa01f5a"} Feb 23 10:20:11 crc kubenswrapper[4904]: I0223 10:20:11.929801 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7flqs"] Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.535220 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-56xvj"] Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.536428 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.588609 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-56xvj"] Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.635439 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcdq2\" (UniqueName: \"kubernetes.io/projected/0ae66b0c-ec0d-42ee-902a-280f8a586cf2-kube-api-access-dcdq2\") pod \"openstack-operator-index-56xvj\" (UID: \"0ae66b0c-ec0d-42ee-902a-280f8a586cf2\") " pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.736696 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcdq2\" (UniqueName: \"kubernetes.io/projected/0ae66b0c-ec0d-42ee-902a-280f8a586cf2-kube-api-access-dcdq2\") pod \"openstack-operator-index-56xvj\" (UID: \"0ae66b0c-ec0d-42ee-902a-280f8a586cf2\") " pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.756502 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcdq2\" (UniqueName: \"kubernetes.io/projected/0ae66b0c-ec0d-42ee-902a-280f8a586cf2-kube-api-access-dcdq2\") pod \"openstack-operator-index-56xvj\" (UID: \"0ae66b0c-ec0d-42ee-902a-280f8a586cf2\") " pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.776974 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7flqs" event={"ID":"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06","Type":"ContainerStarted","Data":"2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241"} Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.777539 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-7flqs" podUID="9b3e5c52-6bf4-4784-8c6b-3733c17a0a06" containerName="registry-server" containerID="cri-o://2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241" gracePeriod=2 Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.800235 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7flqs" podStartSLOduration=1.839251758 podStartE2EDuration="4.800209998s" 
podCreationTimestamp="2026-02-23 10:20:08 +0000 UTC" firstStartedPulling="2026-02-23 10:20:09.348281822 +0000 UTC m=+842.768655335" lastFinishedPulling="2026-02-23 10:20:12.309240052 +0000 UTC m=+845.729613575" observedRunningTime="2026-02-23 10:20:12.796593426 +0000 UTC m=+846.216966939" watchObservedRunningTime="2026-02-23 10:20:12.800209998 +0000 UTC m=+846.220583511" Feb 23 10:20:12 crc kubenswrapper[4904]: I0223 10:20:12.854245 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.176350 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7flqs" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.243051 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klqw8\" (UniqueName: \"kubernetes.io/projected/9b3e5c52-6bf4-4784-8c6b-3733c17a0a06-kube-api-access-klqw8\") pod \"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06\" (UID: \"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06\") " Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.249084 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3e5c52-6bf4-4784-8c6b-3733c17a0a06-kube-api-access-klqw8" (OuterVolumeSpecName: "kube-api-access-klqw8") pod "9b3e5c52-6bf4-4784-8c6b-3733c17a0a06" (UID: "9b3e5c52-6bf4-4784-8c6b-3733c17a0a06"). InnerVolumeSpecName "kube-api-access-klqw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.344926 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klqw8\" (UniqueName: \"kubernetes.io/projected/9b3e5c52-6bf4-4784-8c6b-3733c17a0a06-kube-api-access-klqw8\") on node \"crc\" DevicePath \"\"" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.359041 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-56xvj"] Feb 23 10:20:13 crc kubenswrapper[4904]: W0223 10:20:13.359690 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ae66b0c_ec0d_42ee_902a_280f8a586cf2.slice/crio-5d4a7649088220b0b6c3595f6cb06f1bf6518f37830942bed6228c7e5f8688b5 WatchSource:0}: Error finding container 5d4a7649088220b0b6c3595f6cb06f1bf6518f37830942bed6228c7e5f8688b5: Status 404 returned error can't find the container with id 5d4a7649088220b0b6c3595f6cb06f1bf6518f37830942bed6228c7e5f8688b5 Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.786215 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-56xvj" event={"ID":"0ae66b0c-ec0d-42ee-902a-280f8a586cf2","Type":"ContainerStarted","Data":"198d4b49ca4d034a301acfa4ff059fd1d04c684b64440d883ef57abf02d5e599"} Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.786779 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-56xvj" event={"ID":"0ae66b0c-ec0d-42ee-902a-280f8a586cf2","Type":"ContainerStarted","Data":"5d4a7649088220b0b6c3595f6cb06f1bf6518f37830942bed6228c7e5f8688b5"} Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.788497 4904 generic.go:334] "Generic (PLEG): container finished" podID="9b3e5c52-6bf4-4784-8c6b-3733c17a0a06" containerID="2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241" exitCode=0 Feb 23 10:20:13 crc kubenswrapper[4904]: 
I0223 10:20:13.788569 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7flqs" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.788562 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7flqs" event={"ID":"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06","Type":"ContainerDied","Data":"2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241"} Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.788777 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7flqs" event={"ID":"9b3e5c52-6bf4-4784-8c6b-3733c17a0a06","Type":"ContainerDied","Data":"355c0d974a364977f0452bef4dbf403e405a44bc737da02375e49f0aefa01f5a"} Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.788808 4904 scope.go:117] "RemoveContainer" containerID="2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.807687 4904 scope.go:117] "RemoveContainer" containerID="2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241" Feb 23 10:20:13 crc kubenswrapper[4904]: E0223 10:20:13.809045 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241\": container with ID starting with 2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241 not found: ID does not exist" containerID="2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.809149 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241"} err="failed to get container status \"2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241\": rpc error: code = NotFound desc = could not find container \"2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241\": container with ID starting with 2b320a0da90002cef63538cead2077d17defff6dc490d91afe98ef8e60e30241 not found: ID does not exist" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.813120 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-56xvj" podStartSLOduration=1.761465495 podStartE2EDuration="1.813076179s" podCreationTimestamp="2026-02-23 10:20:12 +0000 UTC" firstStartedPulling="2026-02-23 10:20:13.363074844 +0000 UTC m=+846.783448357" lastFinishedPulling="2026-02-23 10:20:13.414685508 +0000 UTC m=+846.835059041" observedRunningTime="2026-02-23 10:20:13.807689466 +0000 UTC m=+847.228062999" watchObservedRunningTime="2026-02-23 10:20:13.813076179 +0000 UTC m=+847.233449682" Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.829822 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7flqs"] Feb 23 10:20:13 crc kubenswrapper[4904]: I0223 10:20:13.836416 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-7flqs"] Feb 23 10:20:14 crc kubenswrapper[4904]: I0223 10:20:14.155816 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t48sj" Feb 23 10:20:15 crc kubenswrapper[4904]: I0223 10:20:15.265655 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9b3e5c52-6bf4-4784-8c6b-3733c17a0a06" path="/var/lib/kubelet/pods/9b3e5c52-6bf4-4784-8c6b-3733c17a0a06/volumes" Feb 23 10:20:22 crc kubenswrapper[4904]: I0223 10:20:22.855256 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:22 crc kubenswrapper[4904]: I0223 10:20:22.856142 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:22 crc kubenswrapper[4904]: I0223 10:20:22.908436 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:22 crc kubenswrapper[4904]: I0223 10:20:22.957936 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-56xvj" Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.773830 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4"] Feb 23 10:20:23 crc kubenswrapper[4904]: E0223 10:20:23.774903 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3e5c52-6bf4-4784-8c6b-3733c17a0a06" containerName="registry-server" Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.774937 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3e5c52-6bf4-4784-8c6b-3733c17a0a06" containerName="registry-server" Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.775205 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3e5c52-6bf4-4784-8c6b-3733c17a0a06" containerName="registry-server" Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.776709 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.779948 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4"] Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.786165 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zrb6w" Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.963515 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz2xg\" (UniqueName: \"kubernetes.io/projected/416f9591-cdad-4b2f-bf2b-a9af67b1b260-kube-api-access-mz2xg\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.963814 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-util\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:23 crc kubenswrapper[4904]: I0223 10:20:23.963855 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-bundle\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.065958 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-util\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.066099 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-bundle\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.066211 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz2xg\" (UniqueName: \"kubernetes.io/projected/416f9591-cdad-4b2f-bf2b-a9af67b1b260-kube-api-access-mz2xg\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.066607 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-bundle\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.066971 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-util\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.089092 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz2xg\" (UniqueName: \"kubernetes.io/projected/416f9591-cdad-4b2f-bf2b-a9af67b1b260-kube-api-access-mz2xg\") pod \"37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.175475 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.590668 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4"] Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.897860 4904 generic.go:334] "Generic (PLEG): container finished" podID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerID="0bd033bc48dee1dc1b2aa45576e3828b67d4bb4cb29d44bbd3e885386329969c" exitCode=0 Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.898106 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" event={"ID":"416f9591-cdad-4b2f-bf2b-a9af67b1b260","Type":"ContainerDied","Data":"0bd033bc48dee1dc1b2aa45576e3828b67d4bb4cb29d44bbd3e885386329969c"} Feb 23 10:20:24 crc kubenswrapper[4904]: I0223 10:20:24.898251 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" event={"ID":"416f9591-cdad-4b2f-bf2b-a9af67b1b260","Type":"ContainerStarted","Data":"e72beab441172011053f54c52b06537fa5e0f990155c96caabf978561f4632d8"} Feb 23 10:20:25 crc kubenswrapper[4904]: I0223 10:20:25.905203 4904 generic.go:334] "Generic (PLEG): container finished" podID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerID="ef648733f8beba3d3cb3bd80f52a5a848ff3546d693925a400777e09f8d2155e" exitCode=0 Feb 23 10:20:25 crc kubenswrapper[4904]: I0223 10:20:25.905277 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" event={"ID":"416f9591-cdad-4b2f-bf2b-a9af67b1b260","Type":"ContainerDied","Data":"ef648733f8beba3d3cb3bd80f52a5a848ff3546d693925a400777e09f8d2155e"} Feb 23 10:20:26 crc kubenswrapper[4904]: I0223 10:20:26.917299 4904 generic.go:334] "Generic (PLEG): container finished" podID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerID="0a8aaafe3f8086572927aca32bc5f924f3558e31683ddf0f9c682729617d7f4a" exitCode=0 Feb 23 10:20:26 crc kubenswrapper[4904]: I0223 10:20:26.917467 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" event={"ID":"416f9591-cdad-4b2f-bf2b-a9af67b1b260","Type":"ContainerDied","Data":"0a8aaafe3f8086572927aca32bc5f924f3558e31683ddf0f9c682729617d7f4a"} Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.173525 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.332874 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz2xg\" (UniqueName: \"kubernetes.io/projected/416f9591-cdad-4b2f-bf2b-a9af67b1b260-kube-api-access-mz2xg\") pod \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.332932 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-bundle\") pod \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.333025 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-util\") pod \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\" (UID: \"416f9591-cdad-4b2f-bf2b-a9af67b1b260\") " Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.334146 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-bundle" (OuterVolumeSpecName: "bundle") pod "416f9591-cdad-4b2f-bf2b-a9af67b1b260" (UID: "416f9591-cdad-4b2f-bf2b-a9af67b1b260"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.338067 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416f9591-cdad-4b2f-bf2b-a9af67b1b260-kube-api-access-mz2xg" (OuterVolumeSpecName: "kube-api-access-mz2xg") pod "416f9591-cdad-4b2f-bf2b-a9af67b1b260" (UID: "416f9591-cdad-4b2f-bf2b-a9af67b1b260"). InnerVolumeSpecName "kube-api-access-mz2xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.347034 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-util" (OuterVolumeSpecName: "util") pod "416f9591-cdad-4b2f-bf2b-a9af67b1b260" (UID: "416f9591-cdad-4b2f-bf2b-a9af67b1b260"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.435567 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-util\") on node \"crc\" DevicePath \"\"" Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.435782 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz2xg\" (UniqueName: \"kubernetes.io/projected/416f9591-cdad-4b2f-bf2b-a9af67b1b260-kube-api-access-mz2xg\") on node \"crc\" DevicePath \"\"" Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.435971 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/416f9591-cdad-4b2f-bf2b-a9af67b1b260-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.934889 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" event={"ID":"416f9591-cdad-4b2f-bf2b-a9af67b1b260","Type":"ContainerDied","Data":"e72beab441172011053f54c52b06537fa5e0f990155c96caabf978561f4632d8"} Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.935275 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e72beab441172011053f54c52b06537fa5e0f990155c96caabf978561f4632d8" Feb 23 10:20:28 crc kubenswrapper[4904]: I0223 10:20:28.934995 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4" Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.790285 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-794bd488b9-t978s"] Feb 23 10:20:35 crc kubenswrapper[4904]: E0223 10:20:35.790885 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerName="pull" Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.790900 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerName="pull" Feb 23 10:20:35 crc kubenswrapper[4904]: E0223 10:20:35.790920 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerName="extract" Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.790927 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerName="extract" Feb 23 10:20:35 crc kubenswrapper[4904]: E0223 10:20:35.790936 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerName="util" Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.790942 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerName="util" Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.791050 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="416f9591-cdad-4b2f-bf2b-a9af67b1b260" containerName="extract" Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.791512 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.794600 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5rfgz" Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.822239 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-794bd488b9-t978s"] Feb 23 10:20:35 crc kubenswrapper[4904]: I0223 10:20:35.937923 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxv5q\" (UniqueName: \"kubernetes.io/projected/e83028a5-c494-4797-8675-c6fe7d90a156-kube-api-access-zxv5q\") pod \"openstack-operator-controller-init-794bd488b9-t978s\" (UID: \"e83028a5-c494-4797-8675-c6fe7d90a156\") " pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" Feb 23 10:20:36 crc kubenswrapper[4904]: I0223 10:20:36.039481 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxv5q\" (UniqueName: \"kubernetes.io/projected/e83028a5-c494-4797-8675-c6fe7d90a156-kube-api-access-zxv5q\") pod \"openstack-operator-controller-init-794bd488b9-t978s\" (UID: \"e83028a5-c494-4797-8675-c6fe7d90a156\") " pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" Feb 23 10:20:36 crc kubenswrapper[4904]: I0223 10:20:36.064357 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxv5q\" (UniqueName: \"kubernetes.io/projected/e83028a5-c494-4797-8675-c6fe7d90a156-kube-api-access-zxv5q\") pod \"openstack-operator-controller-init-794bd488b9-t978s\" (UID: \"e83028a5-c494-4797-8675-c6fe7d90a156\") " pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" Feb 23 10:20:36 crc kubenswrapper[4904]: I0223 10:20:36.107650 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" Feb 23 10:20:36 crc kubenswrapper[4904]: I0223 10:20:36.336233 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-794bd488b9-t978s"] Feb 23 10:20:36 crc kubenswrapper[4904]: I0223 10:20:36.996154 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" event={"ID":"e83028a5-c494-4797-8675-c6fe7d90a156","Type":"ContainerStarted","Data":"206de74fcfd0365a0df5d0e2e76e981d197df730dbb35cef453727e8c46d8efc"} Feb 23 10:20:43 crc kubenswrapper[4904]: I0223 10:20:43.041179 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" event={"ID":"e83028a5-c494-4797-8675-c6fe7d90a156","Type":"ContainerStarted","Data":"34ae793ce3bf2f8271d6de3e676f3e5c21ea97423391d7fe557f55334ab55234"} Feb 23 10:20:43 crc kubenswrapper[4904]: I0223 10:20:43.041764 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" Feb 23 10:20:43 crc kubenswrapper[4904]: I0223 10:20:43.077329 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" podStartSLOduration=2.494394525 podStartE2EDuration="8.077309325s" podCreationTimestamp="2026-02-23 10:20:35 +0000 UTC" firstStartedPulling="2026-02-23 10:20:36.354732488 +0000 UTC m=+869.775106001" lastFinishedPulling="2026-02-23 10:20:41.937647288 +0000 UTC m=+875.358020801" observedRunningTime="2026-02-23 10:20:43.074893616 +0000 UTC m=+876.495267139" watchObservedRunningTime="2026-02-23 10:20:43.077309325 +0000 UTC m=+876.497682838" Feb 23 10:20:56 crc kubenswrapper[4904]: I0223 10:20:56.110575 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-794bd488b9-t978s" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.301882 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bkmnj"] Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.303640 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.337071 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkmnj"] Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.388483 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-catalog-content\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.388551 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6mfj\" (UniqueName: \"kubernetes.io/projected/ad620ec8-50ba-4415-a0fa-c175246707b8-kube-api-access-z6mfj\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.388653 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-utilities\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.489612 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-catalog-content\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.489681 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6mfj\" (UniqueName: \"kubernetes.io/projected/ad620ec8-50ba-4415-a0fa-c175246707b8-kube-api-access-z6mfj\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.489751 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-utilities\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.490248 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-utilities\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.490964 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-catalog-content\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.523256 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z6mfj\" (UniqueName: \"kubernetes.io/projected/ad620ec8-50ba-4415-a0fa-c175246707b8-kube-api-access-z6mfj\") pod \"redhat-operators-bkmnj\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") " pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:03 crc kubenswrapper[4904]: I0223 10:21:03.629940 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:04 crc kubenswrapper[4904]: I0223 10:21:04.136985 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkmnj"] Feb 23 10:21:04 crc kubenswrapper[4904]: I0223 10:21:04.343567 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkmnj" event={"ID":"ad620ec8-50ba-4415-a0fa-c175246707b8","Type":"ContainerStarted","Data":"3cc162059c0c71b9de2266b8216c72da4a03333d9c757322aec70bcd219345c5"} Feb 23 10:21:04 crc kubenswrapper[4904]: I0223 10:21:04.344133 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkmnj" event={"ID":"ad620ec8-50ba-4415-a0fa-c175246707b8","Type":"ContainerStarted","Data":"e77bd3a33af053614ea140dea794cce6771f51e305247121e00d8be63647b094"} Feb 23 10:21:05 crc kubenswrapper[4904]: I0223 10:21:05.351128 4904 generic.go:334] "Generic (PLEG): container finished" podID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerID="3cc162059c0c71b9de2266b8216c72da4a03333d9c757322aec70bcd219345c5" exitCode=0 Feb 23 10:21:05 crc kubenswrapper[4904]: I0223 10:21:05.351180 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkmnj" event={"ID":"ad620ec8-50ba-4415-a0fa-c175246707b8","Type":"ContainerDied","Data":"3cc162059c0c71b9de2266b8216c72da4a03333d9c757322aec70bcd219345c5"} Feb 23 10:21:06 crc kubenswrapper[4904]: I0223 10:21:06.359509 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkmnj" event={"ID":"ad620ec8-50ba-4415-a0fa-c175246707b8","Type":"ContainerStarted","Data":"688b96e014994a7914657789bff954d111945d35bcd909f812a07117694660f2"} Feb 23 10:21:08 crc kubenswrapper[4904]: I0223 10:21:08.379037 4904 generic.go:334] "Generic (PLEG): container finished" podID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerID="688b96e014994a7914657789bff954d111945d35bcd909f812a07117694660f2" exitCode=0 Feb 23 10:21:08 crc kubenswrapper[4904]: I0223 10:21:08.379134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkmnj" event={"ID":"ad620ec8-50ba-4415-a0fa-c175246707b8","Type":"ContainerDied","Data":"688b96e014994a7914657789bff954d111945d35bcd909f812a07117694660f2"} Feb 23 10:21:09 crc kubenswrapper[4904]: I0223 10:21:09.389759 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkmnj" event={"ID":"ad620ec8-50ba-4415-a0fa-c175246707b8","Type":"ContainerStarted","Data":"4f8988979d2fe7f612331eccaee9ee78ee317310f35c9e6857b22452d9d3625e"} Feb 23 10:21:09 crc kubenswrapper[4904]: I0223 10:21:09.418903 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bkmnj" podStartSLOduration=2.962853237 podStartE2EDuration="6.418878198s" podCreationTimestamp="2026-02-23 10:21:03 +0000 UTC" firstStartedPulling="2026-02-23 10:21:05.352864795 +0000 UTC m=+898.773238308" lastFinishedPulling="2026-02-23 10:21:08.808889756 +0000 UTC m=+902.229263269" observedRunningTime="2026-02-23 
10:21:09.411137699 +0000 UTC m=+902.831511212" watchObservedRunningTime="2026-02-23 10:21:09.418878198 +0000 UTC m=+902.839251711" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.465283 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vz8pr"] Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.467513 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.489415 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vz8pr"] Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.527075 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-catalog-content\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.527169 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-utilities\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.527280 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxsv\" (UniqueName: \"kubernetes.io/projected/d2bd97fa-e6d7-493c-98b4-2aae4889065e-kube-api-access-2lxsv\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.628550 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lxsv\" (UniqueName: \"kubernetes.io/projected/d2bd97fa-e6d7-493c-98b4-2aae4889065e-kube-api-access-2lxsv\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.628614 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-catalog-content\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.628639 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-utilities\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.629115 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-utilities\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.629227 4904 
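For reference, the arithmetic behind the "Observed pod startup duration" entry above: podStartE2EDuration (6.418878198s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (2.962853237s) is that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 3.456s). A minimal Go sketch that reproduces the numbers from the logged timestamps; the authoritative accounting lives in kubelet's pod_startup_latency_tracker, this only mirrors what is reported here:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the log entry above.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-23 10:21:03 +0000 UTC")
	firstPull := parse("2026-02-23 10:21:05.352864795 +0000 UTC")
	lastPull := parse("2026-02-23 10:21:08.808889756 +0000 UTC")
	running := parse("2026-02-23 10:21:09.418878198 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration: 6.418878198s
	pull := lastPull.Sub(firstPull) // image-pull window:  3.456024961s
	slo := e2e - pull               // podStartSLOduration: 2.962853237s
	fmt.Println(e2e, pull, slo)
}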
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-catalog-content\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.649287 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lxsv\" (UniqueName: \"kubernetes.io/projected/d2bd97fa-e6d7-493c-98b4-2aae4889065e-kube-api-access-2lxsv\") pod \"community-operators-vz8pr\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") " pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:11 crc kubenswrapper[4904]: I0223 10:21:11.785182 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz8pr" Feb 23 10:21:12 crc kubenswrapper[4904]: I0223 10:21:12.414467 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vz8pr"] Feb 23 10:21:13 crc kubenswrapper[4904]: I0223 10:21:13.418692 4904 generic.go:334] "Generic (PLEG): container finished" podID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerID="e6e19012a9300309f9cd0ff43b347425be6f8c97a04d81f62062499bb789ba27" exitCode=0 Feb 23 10:21:13 crc kubenswrapper[4904]: I0223 10:21:13.418781 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz8pr" event={"ID":"d2bd97fa-e6d7-493c-98b4-2aae4889065e","Type":"ContainerDied","Data":"e6e19012a9300309f9cd0ff43b347425be6f8c97a04d81f62062499bb789ba27"} Feb 23 10:21:13 crc kubenswrapper[4904]: I0223 10:21:13.419236 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz8pr" event={"ID":"d2bd97fa-e6d7-493c-98b4-2aae4889065e","Type":"ContainerStarted","Data":"545b16a1888c31fb232acb2ea600a21be6d261cd4734c09cfe2f3473878e9818"} Feb 23 10:21:13 crc kubenswrapper[4904]: I0223 10:21:13.421949 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:21:13 crc kubenswrapper[4904]: I0223 10:21:13.630458 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:13 crc kubenswrapper[4904]: I0223 10:21:13.631073 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bkmnj" Feb 23 10:21:14 crc kubenswrapper[4904]: I0223 10:21:14.429197 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz8pr" event={"ID":"d2bd97fa-e6d7-493c-98b4-2aae4889065e","Type":"ContainerStarted","Data":"cdfc9f1aa47f54bbbffd436b37ed4d67fe8284bf7c5a1082b62cfc9c26fb9ab2"} Feb 23 10:21:14 crc kubenswrapper[4904]: I0223 10:21:14.678467 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkmnj" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="registry-server" probeResult="failure" output=< Feb 23 10:21:14 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 10:21:14 crc kubenswrapper[4904]: > Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.075422 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n6zjc"] Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.077288 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.089112 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6zjc"] Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.181754 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/596e43b6-0031-4018-bce2-420a012e6458-utilities\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.181838 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg8zt\" (UniqueName: \"kubernetes.io/projected/596e43b6-0031-4018-bce2-420a012e6458-kube-api-access-pg8zt\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.181975 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/596e43b6-0031-4018-bce2-420a012e6458-catalog-content\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.283911 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/596e43b6-0031-4018-bce2-420a012e6458-utilities\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.283989 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg8zt\" (UniqueName: \"kubernetes.io/projected/596e43b6-0031-4018-bce2-420a012e6458-kube-api-access-pg8zt\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.284035 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/596e43b6-0031-4018-bce2-420a012e6458-catalog-content\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.284812 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/596e43b6-0031-4018-bce2-420a012e6458-utilities\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.284829 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/596e43b6-0031-4018-bce2-420a012e6458-catalog-content\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.305413 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pg8zt\" (UniqueName: \"kubernetes.io/projected/596e43b6-0031-4018-bce2-420a012e6458-kube-api-access-pg8zt\") pod \"certified-operators-n6zjc\" (UID: \"596e43b6-0031-4018-bce2-420a012e6458\") " pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.391604 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.455872 4904 generic.go:334] "Generic (PLEG): container finished" podID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerID="cdfc9f1aa47f54bbbffd436b37ed4d67fe8284bf7c5a1082b62cfc9c26fb9ab2" exitCode=0 Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.455963 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz8pr" event={"ID":"d2bd97fa-e6d7-493c-98b4-2aae4889065e","Type":"ContainerDied","Data":"cdfc9f1aa47f54bbbffd436b37ed4d67fe8284bf7c5a1082b62cfc9c26fb9ab2"} Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.824083 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6zjc"] Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.895266 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2"] Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.896465 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.900017 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf"] Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.900921 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.904365 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qskh9" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.908555 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-r9s5l" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.917406 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2"] Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.922681 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf"] Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.949134 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx"] Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.949969 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.959607 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-f6z2z" Feb 23 10:21:15 crc kubenswrapper[4904]: I0223 10:21:15.995534 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.000999 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlb9w\" (UniqueName: \"kubernetes.io/projected/13a8ec0f-4892-4d72-947d-e87ab49b3262-kube-api-access-vlb9w\") pod \"cinder-operator-controller-manager-57746b5ff9-9gdtf\" (UID: \"13a8ec0f-4892-4d72-947d-e87ab49b3262\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.001069 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvg4\" (UniqueName: \"kubernetes.io/projected/655fcd29-393f-400c-99a7-01cd2f54f6e8-kube-api-access-xtvg4\") pod \"barbican-operator-controller-manager-c4b7d6946-5btm2\" (UID: \"655fcd29-393f-400c-99a7-01cd2f54f6e8\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.001089 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjll7\" (UniqueName: \"kubernetes.io/projected/8101934b-1cbc-45c6-9f81-d9da4c586b55-kube-api-access-mjll7\") pod \"designate-operator-controller-manager-55cc45767f-6qtcx\" (UID: \"8101934b-1cbc-45c6-9f81-d9da4c586b55\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.020434 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.021459 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.026013 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-d74zk" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.032767 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.034143 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.048076 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.050377 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.050578 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jvnbf" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.063333 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.064145 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.067287 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mmkt6" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.072006 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.103343 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-695pq\" (UniqueName: \"kubernetes.io/projected/2be0ca02-8806-415f-addc-9cd1765721dc-kube-api-access-695pq\") pod \"heat-operator-controller-manager-9595d6797-7tpwb\" (UID: \"2be0ca02-8806-415f-addc-9cd1765721dc\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.103790 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvg4\" (UniqueName: \"kubernetes.io/projected/655fcd29-393f-400c-99a7-01cd2f54f6e8-kube-api-access-xtvg4\") pod \"barbican-operator-controller-manager-c4b7d6946-5btm2\" (UID: \"655fcd29-393f-400c-99a7-01cd2f54f6e8\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.103944 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjll7\" (UniqueName: \"kubernetes.io/projected/8101934b-1cbc-45c6-9f81-d9da4c586b55-kube-api-access-mjll7\") pod \"designate-operator-controller-manager-55cc45767f-6qtcx\" (UID: \"8101934b-1cbc-45c6-9f81-d9da4c586b55\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.104172 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws97n\" (UniqueName: \"kubernetes.io/projected/b76d29c0-207f-45c3-a983-6496fd95588e-kube-api-access-ws97n\") pod \"horizon-operator-controller-manager-54fb488b88-wdw2p\" (UID: \"b76d29c0-207f-45c3-a983-6496fd95588e\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.104381 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-j74vj\" (UniqueName: \"kubernetes.io/projected/e76c5a19-592e-4739-b437-28157ab7d3d5-kube-api-access-j74vj\") pod \"glance-operator-controller-manager-68c6d499cb-hwwbw\" (UID: \"e76c5a19-592e-4739-b437-28157ab7d3d5\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.104605 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlb9w\" (UniqueName: \"kubernetes.io/projected/13a8ec0f-4892-4d72-947d-e87ab49b3262-kube-api-access-vlb9w\") pod \"cinder-operator-controller-manager-57746b5ff9-9gdtf\" (UID: \"13a8ec0f-4892-4d72-947d-e87ab49b3262\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.106198 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.112401 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.115245 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-v28nc" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.121112 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.121621 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.123000 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.130652 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nq2t6" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.138567 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.138811 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjll7\" (UniqueName: \"kubernetes.io/projected/8101934b-1cbc-45c6-9f81-d9da4c586b55-kube-api-access-mjll7\") pod \"designate-operator-controller-manager-55cc45767f-6qtcx\" (UID: \"8101934b-1cbc-45c6-9f81-d9da4c586b55\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.144789 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.146153 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlb9w\" (UniqueName: \"kubernetes.io/projected/13a8ec0f-4892-4d72-947d-e87ab49b3262-kube-api-access-vlb9w\") pod \"cinder-operator-controller-manager-57746b5ff9-9gdtf\" (UID: \"13a8ec0f-4892-4d72-947d-e87ab49b3262\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.149098 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvg4\" (UniqueName: \"kubernetes.io/projected/655fcd29-393f-400c-99a7-01cd2f54f6e8-kube-api-access-xtvg4\") pod \"barbican-operator-controller-manager-c4b7d6946-5btm2\" (UID: \"655fcd29-393f-400c-99a7-01cd2f54f6e8\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.178858 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.179966 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.186619 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rwjx4" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.206929 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws97n\" (UniqueName: \"kubernetes.io/projected/b76d29c0-207f-45c3-a983-6496fd95588e-kube-api-access-ws97n\") pod \"horizon-operator-controller-manager-54fb488b88-wdw2p\" (UID: \"b76d29c0-207f-45c3-a983-6496fd95588e\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.207852 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j74vj\" (UniqueName: \"kubernetes.io/projected/e76c5a19-592e-4739-b437-28157ab7d3d5-kube-api-access-j74vj\") pod \"glance-operator-controller-manager-68c6d499cb-hwwbw\" (UID: \"e76c5a19-592e-4739-b437-28157ab7d3d5\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.208081 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.208247 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p6hh\" (UniqueName: \"kubernetes.io/projected/86643e54-73df-41f4-a567-6631562e465b-kube-api-access-4p6hh\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.208585 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-695pq\" (UniqueName: \"kubernetes.io/projected/2be0ca02-8806-415f-addc-9cd1765721dc-kube-api-access-695pq\") pod \"heat-operator-controller-manager-9595d6797-7tpwb\" (UID: \"2be0ca02-8806-415f-addc-9cd1765721dc\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.208845 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g85v\" (UniqueName: \"kubernetes.io/projected/47a11eed-a07a-47f8-9b13-2fd4d7610c65-kube-api-access-9g85v\") pod \"ironic-operator-controller-manager-6494cdbf8f-dk89f\" (UID: \"47a11eed-a07a-47f8-9b13-2fd4d7610c65\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.215765 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.216690 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.225140 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rk4nw" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.226298 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.233789 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.249394 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j74vj\" (UniqueName: \"kubernetes.io/projected/e76c5a19-592e-4739-b437-28157ab7d3d5-kube-api-access-j74vj\") pod \"glance-operator-controller-manager-68c6d499cb-hwwbw\" (UID: \"e76c5a19-592e-4739-b437-28157ab7d3d5\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.256024 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-695pq\" (UniqueName: \"kubernetes.io/projected/2be0ca02-8806-415f-addc-9cd1765721dc-kube-api-access-695pq\") pod \"heat-operator-controller-manager-9595d6797-7tpwb\" (UID: \"2be0ca02-8806-415f-addc-9cd1765721dc\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.268282 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.275756 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws97n\" (UniqueName: \"kubernetes.io/projected/b76d29c0-207f-45c3-a983-6496fd95588e-kube-api-access-ws97n\") pod \"horizon-operator-controller-manager-54fb488b88-wdw2p\" (UID: \"b76d29c0-207f-45c3-a983-6496fd95588e\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.301771 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.314168 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7c6\" (UniqueName: \"kubernetes.io/projected/d56c4104-6cd0-4d5f-b63e-1be797de40d8-kube-api-access-sf7c6\") pod \"keystone-operator-controller-manager-6c78d668d5-b8bjv\" (UID: \"d56c4104-6cd0-4d5f-b63e-1be797de40d8\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.314609 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g85v\" (UniqueName: \"kubernetes.io/projected/47a11eed-a07a-47f8-9b13-2fd4d7610c65-kube-api-access-9g85v\") pod \"ironic-operator-controller-manager-6494cdbf8f-dk89f\" (UID: \"47a11eed-a07a-47f8-9b13-2fd4d7610c65\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.314811 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.314933 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p6hh\" (UniqueName: \"kubernetes.io/projected/86643e54-73df-41f4-a567-6631562e465b-kube-api-access-4p6hh\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.315116 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6c6\" (UniqueName: \"kubernetes.io/projected/87c2d044-3f5b-4a7c-80a9-f70c00310af9-kube-api-access-nj6c6\") pod \"manila-operator-controller-manager-96fff9cb8-f6gqt\" (UID: \"87c2d044-3f5b-4a7c-80a9-f70c00310af9\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" Feb 23 10:21:16 crc kubenswrapper[4904]: E0223 10:21:16.316475 4904 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 10:21:16 crc kubenswrapper[4904]: E0223 10:21:16.316539 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert podName:86643e54-73df-41f4-a567-6631562e465b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:16.816521561 +0000 UTC m=+910.236895074 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert") pod "infra-operator-controller-manager-66d6b5f488-5ghdl" (UID: "86643e54-73df-41f4-a567-6631562e465b") : secret "infra-operator-webhook-server-cert" not found Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.297800 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.344732 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.344834 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.349362 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-p9gc6" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.380510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p6hh\" (UniqueName: \"kubernetes.io/projected/86643e54-73df-41f4-a567-6631562e465b-kube-api-access-4p6hh\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.382404 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.410606 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g85v\" (UniqueName: \"kubernetes.io/projected/47a11eed-a07a-47f8-9b13-2fd4d7610c65-kube-api-access-9g85v\") pod \"ironic-operator-controller-manager-6494cdbf8f-dk89f\" (UID: \"47a11eed-a07a-47f8-9b13-2fd4d7610c65\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.438032 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.442093 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.442968 4904 util.go:30] "No sandbox for pod can be found. 
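The "No retries permitted until ... (durationBeforeRetry 500ms)" error above, followed later in this log by the same volume failing with "(durationBeforeRetry 1s)", shows kubelet's per-operation exponential backoff: each failed MountVolume.SetUp roughly doubles the wait before the next attempt, until the missing webhook-server-cert secret is created. A small Go sketch of that retry pattern; the initial delay is taken from the log, while the doubling factor and the cap are assumptions for illustration rather than the exact constants in kubelet's nestedpendingoperations:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond               // first observed durationBeforeRetry
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap, not from this log
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2 // 500ms -> 1s -> 2s ..., matching the progression seen here
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}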
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.449693 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6c6\" (UniqueName: \"kubernetes.io/projected/87c2d044-3f5b-4a7c-80a9-f70c00310af9-kube-api-access-nj6c6\") pod \"manila-operator-controller-manager-96fff9cb8-f6gqt\" (UID: \"87c2d044-3f5b-4a7c-80a9-f70c00310af9\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.459392 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kq2j5" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.466037 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf7c6\" (UniqueName: \"kubernetes.io/projected/d56c4104-6cd0-4d5f-b63e-1be797de40d8-kube-api-access-sf7c6\") pod \"keystone-operator-controller-manager-6c78d668d5-b8bjv\" (UID: \"d56c4104-6cd0-4d5f-b63e-1be797de40d8\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.485376 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.510632 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6c6\" (UniqueName: \"kubernetes.io/projected/87c2d044-3f5b-4a7c-80a9-f70c00310af9-kube-api-access-nj6c6\") pod \"manila-operator-controller-manager-96fff9cb8-f6gqt\" (UID: \"87c2d044-3f5b-4a7c-80a9-f70c00310af9\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.519585 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.529048 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.549884 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf7c6\" (UniqueName: \"kubernetes.io/projected/d56c4104-6cd0-4d5f-b63e-1be797de40d8-kube-api-access-sf7c6\") pod \"keystone-operator-controller-manager-6c78d668d5-b8bjv\" (UID: \"d56c4104-6cd0-4d5f-b63e-1be797de40d8\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.558282 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.562069 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.567569 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh9nm\" (UniqueName: \"kubernetes.io/projected/82b3f66d-f7fd-4949-bb69-a8203973ce95-kube-api-access-fh9nm\") pod \"mariadb-operator-controller-manager-66997756f6-hf8p4\" (UID: \"82b3f66d-f7fd-4949-bb69-a8203973ce95\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.567635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbmv\" (UniqueName: \"kubernetes.io/projected/1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4-kube-api-access-4nbmv\") pod \"neutron-operator-controller-manager-54967dbbdf-h6d6n\" (UID: \"1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.568414 4904 generic.go:334] "Generic (PLEG): container finished" podID="596e43b6-0031-4018-bce2-420a012e6458" containerID="935f5c598625587663729d319df491c8bf3a75e444eaf09aa0a8a7a28242eb22" exitCode=0 Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.568473 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6zjc" event={"ID":"596e43b6-0031-4018-bce2-420a012e6458","Type":"ContainerDied","Data":"935f5c598625587663729d319df491c8bf3a75e444eaf09aa0a8a7a28242eb22"} Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.568498 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6zjc" event={"ID":"596e43b6-0031-4018-bce2-420a012e6458","Type":"ContainerStarted","Data":"b1d41ece9415a0675c870488d9b96f695ce1613ef9e216851e827e92570d24ac"} Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.574314 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sf9gg" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.574793 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.585784 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.586907 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.602740 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.608293 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-v68sh" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.615724 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz8pr" event={"ID":"d2bd97fa-e6d7-493c-98b4-2aae4889065e","Type":"ContainerStarted","Data":"48af04ff7ab66a955255028fbe32cbf0e043446e5a64aef6aa368f2898c63d6c"} Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.627372 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.639140 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.645795 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.646653 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.651285 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-skh6h" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.651654 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.652451 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.656411 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-n967s" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.663559 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.673510 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhjk\" (UniqueName: \"kubernetes.io/projected/c46970b9-27d9-4b4e-a470-20df6b3fd44c-kube-api-access-wfhjk\") pod \"placement-operator-controller-manager-57bd55f9b7-wz2gg\" (UID: \"c46970b9-27d9-4b4e-a470-20df6b3fd44c\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.673588 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stw47\" (UniqueName: \"kubernetes.io/projected/81c9e2b3-cfc0-400f-9534-f9f6ba8f0482-kube-api-access-stw47\") pod \"ovn-operator-controller-manager-85c99d655-s95c7\" (UID: \"81c9e2b3-cfc0-400f-9534-f9f6ba8f0482\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.673624 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6sp6\" (UniqueName: \"kubernetes.io/projected/81f9873a-af57-4f93-85cc-df46dbcbcde8-kube-api-access-w6sp6\") pod \"nova-operator-controller-manager-5ddd85db87-d6v74\" (UID: \"81f9873a-af57-4f93-85cc-df46dbcbcde8\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.673674 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh9nm\" (UniqueName: \"kubernetes.io/projected/82b3f66d-f7fd-4949-bb69-a8203973ce95-kube-api-access-fh9nm\") pod \"mariadb-operator-controller-manager-66997756f6-hf8p4\" (UID: \"82b3f66d-f7fd-4949-bb69-a8203973ce95\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.691629 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.691817 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.673710 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbmv\" (UniqueName: \"kubernetes.io/projected/1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4-kube-api-access-4nbmv\") pod \"neutron-operator-controller-manager-54967dbbdf-h6d6n\" (UID: \"1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.693830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9775q\" (UniqueName: \"kubernetes.io/projected/a28a8d14-5770-426d-b7ba-f2c89f1c5f3f-kube-api-access-9775q\") pod \"octavia-operator-controller-manager-745bbbd77b-kxzq8\" (UID: \"a28a8d14-5770-426d-b7ba-f2c89f1c5f3f\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.694278 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b2ds7" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.742578 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh9nm\" (UniqueName: \"kubernetes.io/projected/82b3f66d-f7fd-4949-bb69-a8203973ce95-kube-api-access-fh9nm\") pod \"mariadb-operator-controller-manager-66997756f6-hf8p4\" (UID: \"82b3f66d-f7fd-4949-bb69-a8203973ce95\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.743134 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.747948 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.761461 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.761739 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2bx96" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.763032 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.763307 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.777608 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbmv\" (UniqueName: \"kubernetes.io/projected/1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4-kube-api-access-4nbmv\") pod \"neutron-operator-controller-manager-54967dbbdf-h6d6n\" (UID: \"1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.779552 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.795841 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9775q\" (UniqueName: \"kubernetes.io/projected/a28a8d14-5770-426d-b7ba-f2c89f1c5f3f-kube-api-access-9775q\") pod \"octavia-operator-controller-manager-745bbbd77b-kxzq8\" (UID: \"a28a8d14-5770-426d-b7ba-f2c89f1c5f3f\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.795906 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfhjk\" (UniqueName: \"kubernetes.io/projected/c46970b9-27d9-4b4e-a470-20df6b3fd44c-kube-api-access-wfhjk\") pod \"placement-operator-controller-manager-57bd55f9b7-wz2gg\" (UID: \"c46970b9-27d9-4b4e-a470-20df6b3fd44c\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.795982 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n654m\" (UniqueName: \"kubernetes.io/projected/4ae6c04a-de30-4cba-8b66-740d209955b8-kube-api-access-n654m\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.796010 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stw47\" (UniqueName: \"kubernetes.io/projected/81c9e2b3-cfc0-400f-9534-f9f6ba8f0482-kube-api-access-stw47\") pod \"ovn-operator-controller-manager-85c99d655-s95c7\" (UID: \"81c9e2b3-cfc0-400f-9534-f9f6ba8f0482\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.796034 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxcl2\" (UniqueName: \"kubernetes.io/projected/a2e7a886-a671-44c1-909d-90224369a5e2-kube-api-access-mxcl2\") pod \"swift-operator-controller-manager-79558bbfbf-2mds6\" (UID: \"a2e7a886-a671-44c1-909d-90224369a5e2\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.796080 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6sp6\" (UniqueName: \"kubernetes.io/projected/81f9873a-af57-4f93-85cc-df46dbcbcde8-kube-api-access-w6sp6\") pod \"nova-operator-controller-manager-5ddd85db87-d6v74\" (UID: \"81f9873a-af57-4f93-85cc-df46dbcbcde8\") " 
pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.796128 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.820288 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.838802 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfhjk\" (UniqueName: \"kubernetes.io/projected/c46970b9-27d9-4b4e-a470-20df6b3fd44c-kube-api-access-wfhjk\") pod \"placement-operator-controller-manager-57bd55f9b7-wz2gg\" (UID: \"c46970b9-27d9-4b4e-a470-20df6b3fd44c\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.841590 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9775q\" (UniqueName: \"kubernetes.io/projected/a28a8d14-5770-426d-b7ba-f2c89f1c5f3f-kube-api-access-9775q\") pod \"octavia-operator-controller-manager-745bbbd77b-kxzq8\" (UID: \"a28a8d14-5770-426d-b7ba-f2c89f1c5f3f\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.842171 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.853052 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.860061 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.869258 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6sp6\" (UniqueName: \"kubernetes.io/projected/81f9873a-af57-4f93-85cc-df46dbcbcde8-kube-api-access-w6sp6\") pod \"nova-operator-controller-manager-5ddd85db87-d6v74\" (UID: \"81f9873a-af57-4f93-85cc-df46dbcbcde8\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.870063 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ft99c" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.876724 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.879681 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.881011 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stw47\" (UniqueName: \"kubernetes.io/projected/81c9e2b3-cfc0-400f-9534-f9f6ba8f0482-kube-api-access-stw47\") pod \"ovn-operator-controller-manager-85c99d655-s95c7\" (UID: \"81c9e2b3-cfc0-400f-9534-f9f6ba8f0482\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.898821 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n654m\" (UniqueName: \"kubernetes.io/projected/4ae6c04a-de30-4cba-8b66-740d209955b8-kube-api-access-n654m\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.898861 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxcl2\" (UniqueName: \"kubernetes.io/projected/a2e7a886-a671-44c1-909d-90224369a5e2-kube-api-access-mxcl2\") pod \"swift-operator-controller-manager-79558bbfbf-2mds6\" (UID: \"a2e7a886-a671-44c1-909d-90224369a5e2\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.898888 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.898946 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.899043 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27s6x\" (UniqueName: \"kubernetes.io/projected/27e5aeb0-8732-493a-9ec6-ebb846416db9-kube-api-access-27s6x\") pod \"telemetry-operator-controller-manager-56dc67d744-x55xc\" (UID: \"27e5aeb0-8732-493a-9ec6-ebb846416db9\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" Feb 23 10:21:16 crc kubenswrapper[4904]: E0223 10:21:16.899705 4904 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 10:21:16 crc kubenswrapper[4904]: E0223 10:21:16.899758 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert podName:86643e54-73df-41f4-a567-6631562e465b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:17.899746125 +0000 UTC m=+911.320119638 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert") pod "infra-operator-controller-manager-66d6b5f488-5ghdl" (UID: "86643e54-73df-41f4-a567-6631562e465b") : secret "infra-operator-webhook-server-cert" not found Feb 23 10:21:16 crc kubenswrapper[4904]: E0223 10:21:16.900006 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:16 crc kubenswrapper[4904]: E0223 10:21:16.900030 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert podName:4ae6c04a-de30-4cba-8b66-740d209955b8 nodeName:}" failed. No retries permitted until 2026-02-23 10:21:17.400022763 +0000 UTC m=+910.820396276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" (UID: "4ae6c04a-de30-4cba-8b66-740d209955b8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.912438 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.914645 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.917002 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.918569 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-r8t5j" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.953923 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.959243 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.964347 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n654m\" (UniqueName: \"kubernetes.io/projected/4ae6c04a-de30-4cba-8b66-740d209955b8-kube-api-access-n654m\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.967408 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.969027 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxcl2\" (UniqueName: \"kubernetes.io/projected/a2e7a886-a671-44c1-909d-90224369a5e2-kube-api-access-mxcl2\") pod \"swift-operator-controller-manager-79558bbfbf-2mds6\" (UID: \"a2e7a886-a671-44c1-909d-90224369a5e2\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.971569 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.978117 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7dzzc" Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.978318 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct"] Feb 23 10:21:16 crc kubenswrapper[4904]: I0223 10:21:16.992578 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.000172 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw5sk\" (UniqueName: \"kubernetes.io/projected/d5c4df32-1ca8-4497-8ccd-30b31f71f364-kube-api-access-vw5sk\") pod \"watcher-operator-controller-manager-ccb96f8ff-gxgct\" (UID: \"d5c4df32-1ca8-4497-8ccd-30b31f71f364\") " pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.000230 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27s6x\" (UniqueName: \"kubernetes.io/projected/27e5aeb0-8732-493a-9ec6-ebb846416db9-kube-api-access-27s6x\") pod \"telemetry-operator-controller-manager-56dc67d744-x55xc\" (UID: \"27e5aeb0-8732-493a-9ec6-ebb846416db9\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.000268 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrlsd\" (UniqueName: \"kubernetes.io/projected/58c024f9-9e55-4e30-9dbc-8ba460e4b91d-kube-api-access-wrlsd\") pod \"test-operator-controller-manager-8467ccb4c8-58wnl\" (UID: \"58c024f9-9e55-4e30-9dbc-8ba460e4b91d\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.028670 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz"] Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.029584 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.031946 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.032506 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-x4xl5" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.034316 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vz8pr" podStartSLOduration=3.524850892 podStartE2EDuration="6.034304682s" podCreationTimestamp="2026-02-23 10:21:11 +0000 UTC" firstStartedPulling="2026-02-23 10:21:13.421570366 +0000 UTC m=+906.841943869" lastFinishedPulling="2026-02-23 10:21:15.931024156 +0000 UTC m=+909.351397659" observedRunningTime="2026-02-23 10:21:16.733930761 +0000 UTC m=+910.154304294" watchObservedRunningTime="2026-02-23 10:21:17.034304682 +0000 UTC m=+910.454678195" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.035172 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27s6x\" (UniqueName: \"kubernetes.io/projected/27e5aeb0-8732-493a-9ec6-ebb846416db9-kube-api-access-27s6x\") pod \"telemetry-operator-controller-manager-56dc67d744-x55xc\" (UID: \"27e5aeb0-8732-493a-9ec6-ebb846416db9\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.048013 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.087418 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.089846 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz"] Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.102095 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw5sk\" (UniqueName: \"kubernetes.io/projected/d5c4df32-1ca8-4497-8ccd-30b31f71f364-kube-api-access-vw5sk\") pod \"watcher-operator-controller-manager-ccb96f8ff-gxgct\" (UID: \"d5c4df32-1ca8-4497-8ccd-30b31f71f364\") " pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.102208 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.102240 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.102289 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrlsd\" (UniqueName: \"kubernetes.io/projected/58c024f9-9e55-4e30-9dbc-8ba460e4b91d-kube-api-access-wrlsd\") pod \"test-operator-controller-manager-8467ccb4c8-58wnl\" (UID: \"58c024f9-9e55-4e30-9dbc-8ba460e4b91d\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.102355 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdw2\" (UniqueName: \"kubernetes.io/projected/46699645-af00-4370-a5dd-c1c94361be2b-kube-api-access-wzdw2\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.136052 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.145963 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrlsd\" (UniqueName: \"kubernetes.io/projected/58c024f9-9e55-4e30-9dbc-8ba460e4b91d-kube-api-access-wrlsd\") pod \"test-operator-controller-manager-8467ccb4c8-58wnl\" (UID: \"58c024f9-9e55-4e30-9dbc-8ba460e4b91d\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.179136 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw5sk\" (UniqueName: \"kubernetes.io/projected/d5c4df32-1ca8-4497-8ccd-30b31f71f364-kube-api-access-vw5sk\") pod \"watcher-operator-controller-manager-ccb96f8ff-gxgct\" (UID: \"d5c4df32-1ca8-4497-8ccd-30b31f71f364\") " pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.224135 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc"] Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.230763 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.243643 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-c5jl7" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.244003 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc"] Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.246553 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.247299 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.247386 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.247629 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdw2\" (UniqueName: \"kubernetes.io/projected/46699645-af00-4370-a5dd-c1c94361be2b-kube-api-access-wzdw2\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.248886 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.248934 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:17.748917799 +0000 UTC m=+911.169291312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "metrics-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.248930 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.249040 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:17.749020322 +0000 UTC m=+911.169393835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "webhook-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.333404 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.374183 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdw2\" (UniqueName: \"kubernetes.io/projected/46699645-af00-4370-a5dd-c1c94361be2b-kube-api-access-wzdw2\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.426776 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.467442 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2"] Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.467499 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf"] Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.467832 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.467881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brnvm\" (UniqueName: \"kubernetes.io/projected/98685ce7-3242-4e1e-94f9-bf90399619d3-kube-api-access-brnvm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bt9jc\" (UID: \"98685ce7-3242-4e1e-94f9-bf90399619d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.468593 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.468642 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert podName:4ae6c04a-de30-4cba-8b66-740d209955b8 nodeName:}" failed. No retries permitted until 2026-02-23 10:21:18.468627352 +0000 UTC m=+911.889000865 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" (UID: "4ae6c04a-de30-4cba-8b66-740d209955b8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.569681 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brnvm\" (UniqueName: \"kubernetes.io/projected/98685ce7-3242-4e1e-94f9-bf90399619d3-kube-api-access-brnvm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bt9jc\" (UID: \"98685ce7-3242-4e1e-94f9-bf90399619d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.613852 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brnvm\" (UniqueName: \"kubernetes.io/projected/98685ce7-3242-4e1e-94f9-bf90399619d3-kube-api-access-brnvm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bt9jc\" (UID: \"98685ce7-3242-4e1e-94f9-bf90399619d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.665587 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" event={"ID":"13a8ec0f-4892-4d72-947d-e87ab49b3262","Type":"ContainerStarted","Data":"62c91728bd427ef9e681ad41968c1fc048598059bdf622dbd6588ae3551a7491"} Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.678515 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" event={"ID":"655fcd29-393f-400c-99a7-01cd2f54f6e8","Type":"ContainerStarted","Data":"5ff5146c3450bacfce4c905d18c22dffa9b37a26d64dbdc67713c5868ba173a9"} Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.701691 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.773671 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.773798 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.773887 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.773943 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. 
No retries permitted until 2026-02-23 10:21:18.773927002 +0000 UTC m=+912.194300515 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "webhook-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.773943 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.777822 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:18.777775861 +0000 UTC m=+912.198149374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "metrics-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.952162 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx"] Feb 23 10:21:17 crc kubenswrapper[4904]: I0223 10:21:17.979016 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.979325 4904 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 10:21:17 crc kubenswrapper[4904]: E0223 10:21:17.979408 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert podName:86643e54-73df-41f4-a567-6631562e465b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:19.97938313 +0000 UTC m=+913.399756643 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert") pod "infra-operator-controller-manager-66d6b5f488-5ghdl" (UID: "86643e54-73df-41f4-a567-6631562e465b") : secret "infra-operator-webhook-server-cert" not found Feb 23 10:21:18 crc kubenswrapper[4904]: W0223 10:21:18.097420 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8101934b_1cbc_45c6_9f81_d9da4c586b55.slice/crio-008f8548d9185f2445dff57fc863c5b3a080f9e6f7dda2c7b76033382e1eb088 WatchSource:0}: Error finding container 008f8548d9185f2445dff57fc863c5b3a080f9e6f7dda2c7b76033382e1eb088: Status 404 returned error can't find the container with id 008f8548d9185f2445dff57fc863c5b3a080f9e6f7dda2c7b76033382e1eb088 Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.321431 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv"] Feb 23 10:21:18 crc kubenswrapper[4904]: W0223 10:21:18.348416 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82b3f66d_f7fd_4949_bb69_a8203973ce95.slice/crio-d623b28cde8c794d77cdfa58afc64a0b690ff93baea5b2fdcc1a593d00ee4766 WatchSource:0}: Error finding container d623b28cde8c794d77cdfa58afc64a0b690ff93baea5b2fdcc1a593d00ee4766: Status 404 returned error can't find the container with id d623b28cde8c794d77cdfa58afc64a0b690ff93baea5b2fdcc1a593d00ee4766 Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.349444 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4"] Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.489491 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.489660 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.489721 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert podName:4ae6c04a-de30-4cba-8b66-740d209955b8 nodeName:}" failed. No retries permitted until 2026-02-23 10:21:20.489693025 +0000 UTC m=+913.910066538 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" (UID: "4ae6c04a-de30-4cba-8b66-740d209955b8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.495307 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f"] Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.523394 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb"] Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.523482 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt"] Feb 23 10:21:18 crc kubenswrapper[4904]: W0223 10:21:18.538693 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47a11eed_a07a_47f8_9b13_2fd4d7610c65.slice/crio-2048799a811f2c5c99c895c48c67dae1648c12d6758a0c30e704415a912a5675 WatchSource:0}: Error finding container 2048799a811f2c5c99c895c48c67dae1648c12d6758a0c30e704415a912a5675: Status 404 returned error can't find the container with id 2048799a811f2c5c99c895c48c67dae1648c12d6758a0c30e704415a912a5675 Feb 23 10:21:18 crc kubenswrapper[4904]: W0223 10:21:18.557005 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2be0ca02_8806_415f_addc_9cd1765721dc.slice/crio-e16ea7a22d99c25d1e547c8104f56247e664b5a7137581a58b6b35b16f8670f9 WatchSource:0}: Error finding container e16ea7a22d99c25d1e547c8104f56247e664b5a7137581a58b6b35b16f8670f9: Status 404 returned error can't find the container with id e16ea7a22d99c25d1e547c8104f56247e664b5a7137581a58b6b35b16f8670f9 Feb 23 10:21:18 crc kubenswrapper[4904]: W0223 10:21:18.599844 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87c2d044_3f5b_4a7c_80a9_f70c00310af9.slice/crio-09e2da161dfbd416a6e11f0d30bf6f58e6fe93c60814ad7a8506a89a9f10a4bb WatchSource:0}: Error finding container 09e2da161dfbd416a6e11f0d30bf6f58e6fe93c60814ad7a8506a89a9f10a4bb: Status 404 returned error can't find the container with id 09e2da161dfbd416a6e11f0d30bf6f58e6fe93c60814ad7a8506a89a9f10a4bb Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.687666 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" event={"ID":"8101934b-1cbc-45c6-9f81-d9da4c586b55","Type":"ContainerStarted","Data":"008f8548d9185f2445dff57fc863c5b3a080f9e6f7dda2c7b76033382e1eb088"} Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.688971 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" event={"ID":"2be0ca02-8806-415f-addc-9cd1765721dc","Type":"ContainerStarted","Data":"e16ea7a22d99c25d1e547c8104f56247e664b5a7137581a58b6b35b16f8670f9"} Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.689823 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" 
event={"ID":"82b3f66d-f7fd-4949-bb69-a8203973ce95","Type":"ContainerStarted","Data":"d623b28cde8c794d77cdfa58afc64a0b690ff93baea5b2fdcc1a593d00ee4766"} Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.690850 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" event={"ID":"47a11eed-a07a-47f8-9b13-2fd4d7610c65","Type":"ContainerStarted","Data":"2048799a811f2c5c99c895c48c67dae1648c12d6758a0c30e704415a912a5675"} Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.691890 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" event={"ID":"87c2d044-3f5b-4a7c-80a9-f70c00310af9","Type":"ContainerStarted","Data":"09e2da161dfbd416a6e11f0d30bf6f58e6fe93c60814ad7a8506a89a9f10a4bb"} Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.692827 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" event={"ID":"d56c4104-6cd0-4d5f-b63e-1be797de40d8","Type":"ContainerStarted","Data":"1b06aad6db7689e88d405c48489c7ed477a63f13799619083565467213b7f7e6"} Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.794929 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.794992 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.795263 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.795301 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.795340 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:20.795318724 +0000 UTC m=+914.215692237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "metrics-server-cert" not found Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.795500 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:20.795471478 +0000 UTC m=+914.215844991 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "webhook-server-cert" not found Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.806836 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8"] Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.816182 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p"] Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.840498 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct"] Feb 23 10:21:18 crc kubenswrapper[4904]: W0223 10:21:18.845021 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb76d29c0_207f_45c3_a983_6496fd95588e.slice/crio-ae3268accc4f63431e37c9a6bdb36d6b94770232453cb4b1095d032a3ce1ba99 WatchSource:0}: Error finding container ae3268accc4f63431e37c9a6bdb36d6b94770232453cb4b1095d032a3ce1ba99: Status 404 returned error can't find the container with id ae3268accc4f63431e37c9a6bdb36d6b94770232453cb4b1095d032a3ce1ba99 Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.863277 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n"] Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.879281 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7"] Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.898293 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74"] Feb 23 10:21:18 crc kubenswrapper[4904]: W0223 10:21:18.908744 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2e7a886_a671_44c1_909d_90224369a5e2.slice/crio-4b44ea8e1f944cf1909def20676cd1a2b3f3178ba279c5041005eec032324075 WatchSource:0}: Error finding container 4b44ea8e1f944cf1909def20676cd1a2b3f3178ba279c5041005eec032324075: Status 404 returned error can't find the container with id 4b44ea8e1f944cf1909def20676cd1a2b3f3178ba279c5041005eec032324075 Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.928186 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27s6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-x55xc_openstack-operators(27e5aeb0-8732-493a-9ec6-ebb846416db9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.930868 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" podUID="27e5aeb0-8732-493a-9ec6-ebb846416db9" Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.934318 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw"] Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.946357 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc"] Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.949427 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrlsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-58wnl_openstack-operators(58c024f9-9e55-4e30-9dbc-8ba460e4b91d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.950704 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" podUID="58c024f9-9e55-4e30-9dbc-8ba460e4b91d" Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.957931 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6"] Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.984226 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brnvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-bt9jc_openstack-operators(98685ce7-3242-4e1e-94f9-bf90399619d3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.984918 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wfhjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-wz2gg_openstack-operators(c46970b9-27d9-4b4e-a470-20df6b3fd44c): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.985493 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" podUID="98685ce7-3242-4e1e-94f9-bf90399619d3" Feb 23 10:21:18 crc kubenswrapper[4904]: E0223 10:21:18.986524 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" podUID="c46970b9-27d9-4b4e-a470-20df6b3fd44c" Feb 23 10:21:18 crc kubenswrapper[4904]: I0223 10:21:18.991287 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg"] Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.000908 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl"] Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.007216 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc"] Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.706360 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" event={"ID":"c46970b9-27d9-4b4e-a470-20df6b3fd44c","Type":"ContainerStarted","Data":"79ba56727a3894aa5b6b5f583b70304210dfee29a7e4cf62a062cc8ecbced678"} Feb 23 10:21:19 crc kubenswrapper[4904]: E0223 10:21:19.708615 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" podUID="c46970b9-27d9-4b4e-a470-20df6b3fd44c" Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.709008 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" event={"ID":"27e5aeb0-8732-493a-9ec6-ebb846416db9","Type":"ContainerStarted","Data":"8e53b47939e1cdc0a6364e53dbde675156d8a85fc0c0bca9baf7bc8dbd08b281"} Feb 23 10:21:19 crc kubenswrapper[4904]: E0223 10:21:19.711221 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" podUID="27e5aeb0-8732-493a-9ec6-ebb846416db9" Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.712751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" event={"ID":"a2e7a886-a671-44c1-909d-90224369a5e2","Type":"ContainerStarted","Data":"4b44ea8e1f944cf1909def20676cd1a2b3f3178ba279c5041005eec032324075"} Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.718148 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" 
event={"ID":"81f9873a-af57-4f93-85cc-df46dbcbcde8","Type":"ContainerStarted","Data":"e628ec5d9e3364fc44a4d1198ea2b0e15957c92014781ea3f975eabdaa43caa8"} Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.723766 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" event={"ID":"1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4","Type":"ContainerStarted","Data":"548eea7a2bac0365147725f37ead406bae5b742572de614d163bae3a231ddef0"} Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.727579 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" event={"ID":"98685ce7-3242-4e1e-94f9-bf90399619d3","Type":"ContainerStarted","Data":"99aa08212090d4c32dd8d74ad9bdf166340a0ef144818b8f50c6f41d47de4403"} Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.730389 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" event={"ID":"e76c5a19-592e-4739-b437-28157ab7d3d5","Type":"ContainerStarted","Data":"ea6fc4c4a8f137021a00642a0a71652ad1b446f8f81d1f00359c68e5ec6cda98"} Feb 23 10:21:19 crc kubenswrapper[4904]: E0223 10:21:19.730790 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" podUID="98685ce7-3242-4e1e-94f9-bf90399619d3" Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.734291 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" event={"ID":"81c9e2b3-cfc0-400f-9534-f9f6ba8f0482","Type":"ContainerStarted","Data":"ce09f2dcea0ddbfc8c1bdd117cd37ff1a954d6c1a9ed950269a70180c5a15556"} Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.738015 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" event={"ID":"d5c4df32-1ca8-4497-8ccd-30b31f71f364","Type":"ContainerStarted","Data":"3fd53f73138558f192830c92fe22202a83011014a58ddfb8a9c8696dd1444d50"} Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.745576 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" event={"ID":"58c024f9-9e55-4e30-9dbc-8ba460e4b91d","Type":"ContainerStarted","Data":"a651e8b9ed5db61e3ee49640944f34361e09f0bd5baf50b02aa99099ce9e0e2c"} Feb 23 10:21:19 crc kubenswrapper[4904]: E0223 10:21:19.747521 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" podUID="58c024f9-9e55-4e30-9dbc-8ba460e4b91d" Feb 23 10:21:19 crc kubenswrapper[4904]: I0223 10:21:19.749502 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" event={"ID":"a28a8d14-5770-426d-b7ba-f2c89f1c5f3f","Type":"ContainerStarted","Data":"d9adace181fc6b770c7ad637d60130b1210918fcaf9258ca357f67c3657508e8"} Feb 23 10:21:19 crc 
kubenswrapper[4904]: I0223 10:21:19.752415 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" event={"ID":"b76d29c0-207f-45c3-a983-6496fd95588e","Type":"ContainerStarted","Data":"ae3268accc4f63431e37c9a6bdb36d6b94770232453cb4b1095d032a3ce1ba99"} Feb 23 10:21:20 crc kubenswrapper[4904]: I0223 10:21:20.018355 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.018635 4904 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.018848 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert podName:86643e54-73df-41f4-a567-6631562e465b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:24.018819098 +0000 UTC m=+917.439192611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert") pod "infra-operator-controller-manager-66d6b5f488-5ghdl" (UID: "86643e54-73df-41f4-a567-6631562e465b") : secret "infra-operator-webhook-server-cert" not found Feb 23 10:21:20 crc kubenswrapper[4904]: I0223 10:21:20.532475 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.532652 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.532729 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert podName:4ae6c04a-de30-4cba-8b66-740d209955b8 nodeName:}" failed. No retries permitted until 2026-02-23 10:21:24.532699455 +0000 UTC m=+917.953072968 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" (UID: "4ae6c04a-de30-4cba-8b66-740d209955b8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.779541 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" podUID="c46970b9-27d9-4b4e-a470-20df6b3fd44c" Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.779829 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" podUID="27e5aeb0-8732-493a-9ec6-ebb846416db9" Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.779961 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" podUID="58c024f9-9e55-4e30-9dbc-8ba460e4b91d" Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.783296 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" podUID="98685ce7-3242-4e1e-94f9-bf90399619d3" Feb 23 10:21:20 crc kubenswrapper[4904]: I0223 10:21:20.841804 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:20 crc kubenswrapper[4904]: I0223 10:21:20.842013 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.841961 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.842306 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 10:21:20 crc kubenswrapper[4904]: E0223 10:21:20.842535 4904 
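The mount failures above, and their replays at 10:21:24 and 10:21:32 further down, show the volume reconciler's per-operation exponential backoff: durationBeforeRetry doubles from 4s to 8s to 16s while the webhook and metrics secrets remain missing. A minimal Go sketch of that doubling schedule, assuming only the initial delay and factor visible in this log (illustrative; not the kubelet's actual nestedpendingoperations code):

```go
package main

import (
	"fmt"
	"time"
)

// retrySchedule reproduces the doubling retry delays observed above:
// 4s, 8s, 16s, ... capped at maxDelay. Illustrative only; the real
// backoff lives in the kubelet's nested pending operations bookkeeping.
func retrySchedule(initial, maxDelay time.Duration, attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// Matches the progression in these entries: 4s -> 8s -> 16s -> ...
	for i, d := range retrySchedule(4*time.Second, 2*time.Minute, 5) {
		fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n", i+1, d)
	}
}
```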
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.083615 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sq5gd"]
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.086050 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.130262 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq5gd"]
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.250941 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-catalog-content\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.250987 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcbhs\" (UniqueName: \"kubernetes.io/projected/fa04cfec-d7c7-42f9-b386-73b9de223e59-kube-api-access-qcbhs\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.251094 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-utilities\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.355697 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-catalog-content\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.355768 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcbhs\" (UniqueName: \"kubernetes.io/projected/fa04cfec-d7c7-42f9-b386-73b9de223e59-kube-api-access-qcbhs\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.355919 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-utilities\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.356280 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-catalog-content\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.356395 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-utilities\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.377613 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcbhs\" (UniqueName: \"kubernetes.io/projected/fa04cfec-d7c7-42f9-b386-73b9de223e59-kube-api-access-qcbhs\") pod \"redhat-marketplace-sq5gd\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") " pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.431643 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.785900 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vz8pr"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.786802 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vz8pr"
Feb 23 10:21:21 crc kubenswrapper[4904]: I0223 10:21:21.906732 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vz8pr"
Feb 23 10:21:22 crc kubenswrapper[4904]: I0223 10:21:22.868395 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vz8pr"
Feb 23 10:21:23 crc kubenswrapper[4904]: I0223 10:21:23.681315 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bkmnj"
Feb 23 10:21:23 crc kubenswrapper[4904]: I0223 10:21:23.737208 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bkmnj"
Feb 23 10:21:24 crc kubenswrapper[4904]: I0223 10:21:24.111290 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl"
Feb 23 10:21:24 crc kubenswrapper[4904]: E0223 10:21:24.111592 4904 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 23 10:21:24 crc kubenswrapper[4904]: E0223 10:21:24.111662 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert podName:86643e54-73df-41f4-a567-6631562e465b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:32.111639962 +0000 UTC m=+925.532013485 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert") pod "infra-operator-controller-manager-66d6b5f488-5ghdl" (UID: "86643e54-73df-41f4-a567-6631562e465b") : secret "infra-operator-webhook-server-cert" not found
Feb 23 10:21:24 crc kubenswrapper[4904]: I0223 10:21:24.272286 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vz8pr"]
Feb 23 10:21:24 crc kubenswrapper[4904]: I0223 10:21:24.633586 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr"
Feb 23 10:21:24 crc kubenswrapper[4904]: E0223 10:21:24.634034 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 10:21:24 crc kubenswrapper[4904]: E0223 10:21:24.634237 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert podName:4ae6c04a-de30-4cba-8b66-740d209955b8 nodeName:}" failed. No retries permitted until 2026-02-23 10:21:32.634194925 +0000 UTC m=+926.054568468 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" (UID: "4ae6c04a-de30-4cba-8b66-740d209955b8") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 10:21:24 crc kubenswrapper[4904]: I0223 10:21:24.938402 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz"
Feb 23 10:21:24 crc kubenswrapper[4904]: I0223 10:21:24.938755 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz"
Feb 23 10:21:24 crc kubenswrapper[4904]: E0223 10:21:24.938786 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 23 10:21:24 crc kubenswrapper[4904]: E0223 10:21:24.938935 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:32.938897498 +0000 UTC m=+926.359271181 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "webhook-server-cert" not found
Feb 23 10:21:24 crc kubenswrapper[4904]: E0223 10:21:24.938976 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 23 10:21:24 crc kubenswrapper[4904]: E0223 10:21:24.939079 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:32.939053162 +0000 UTC m=+926.359426845 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "metrics-server-cert" not found
Feb 23 10:21:25 crc kubenswrapper[4904]: I0223 10:21:25.831923 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vz8pr" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerName="registry-server" containerID="cri-o://48af04ff7ab66a955255028fbe32cbf0e043446e5a64aef6aa368f2898c63d6c" gracePeriod=2
Feb 23 10:21:26 crc kubenswrapper[4904]: I0223 10:21:26.470111 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkmnj"]
Feb 23 10:21:26 crc kubenswrapper[4904]: I0223 10:21:26.470339 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bkmnj" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="registry-server" containerID="cri-o://4f8988979d2fe7f612331eccaee9ee78ee317310f35c9e6857b22452d9d3625e" gracePeriod=2
Feb 23 10:21:26 crc kubenswrapper[4904]: I0223 10:21:26.842495 4904 generic.go:334] "Generic (PLEG): container finished" podID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerID="4f8988979d2fe7f612331eccaee9ee78ee317310f35c9e6857b22452d9d3625e" exitCode=0
Feb 23 10:21:26 crc kubenswrapper[4904]: I0223 10:21:26.842560 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkmnj" event={"ID":"ad620ec8-50ba-4415-a0fa-c175246707b8","Type":"ContainerDied","Data":"4f8988979d2fe7f612331eccaee9ee78ee317310f35c9e6857b22452d9d3625e"}
Feb 23 10:21:26 crc kubenswrapper[4904]: I0223 10:21:26.845239 4904 generic.go:334] "Generic (PLEG): container finished" podID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerID="48af04ff7ab66a955255028fbe32cbf0e043446e5a64aef6aa368f2898c63d6c" exitCode=0
Feb 23 10:21:26 crc kubenswrapper[4904]: I0223 10:21:26.845271 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz8pr" event={"ID":"d2bd97fa-e6d7-493c-98b4-2aae4889065e","Type":"ContainerDied","Data":"48af04ff7ab66a955255028fbe32cbf0e043446e5a64aef6aa368f2898c63d6c"}
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.719736 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkmnj"
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.731958 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-utilities\") pod \"ad620ec8-50ba-4415-a0fa-c175246707b8\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") "
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.732069 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-catalog-content\") pod \"ad620ec8-50ba-4415-a0fa-c175246707b8\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") "
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.732098 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6mfj\" (UniqueName: \"kubernetes.io/projected/ad620ec8-50ba-4415-a0fa-c175246707b8-kube-api-access-z6mfj\") pod \"ad620ec8-50ba-4415-a0fa-c175246707b8\" (UID: \"ad620ec8-50ba-4415-a0fa-c175246707b8\") "
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.732992 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-utilities" (OuterVolumeSpecName: "utilities") pod "ad620ec8-50ba-4415-a0fa-c175246707b8" (UID: "ad620ec8-50ba-4415-a0fa-c175246707b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.755459 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad620ec8-50ba-4415-a0fa-c175246707b8-kube-api-access-z6mfj" (OuterVolumeSpecName: "kube-api-access-z6mfj") pod "ad620ec8-50ba-4415-a0fa-c175246707b8" (UID: "ad620ec8-50ba-4415-a0fa-c175246707b8"). InnerVolumeSpecName "kube-api-access-z6mfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.796903 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz8pr"
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.833685 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.833837 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6mfj\" (UniqueName: \"kubernetes.io/projected/ad620ec8-50ba-4415-a0fa-c175246707b8-kube-api-access-z6mfj\") on node \"crc\" DevicePath \"\""
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.857286 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz8pr" event={"ID":"d2bd97fa-e6d7-493c-98b4-2aae4889065e","Type":"ContainerDied","Data":"545b16a1888c31fb232acb2ea600a21be6d261cd4734c09cfe2f3473878e9818"}
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.857335 4904 scope.go:117] "RemoveContainer" containerID="48af04ff7ab66a955255028fbe32cbf0e043446e5a64aef6aa368f2898c63d6c"
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.857456 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz8pr"
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.862021 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkmnj" event={"ID":"ad620ec8-50ba-4415-a0fa-c175246707b8","Type":"ContainerDied","Data":"e77bd3a33af053614ea140dea794cce6771f51e305247121e00d8be63647b094"}
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.862092 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkmnj"
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.875832 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad620ec8-50ba-4415-a0fa-c175246707b8" (UID: "ad620ec8-50ba-4415-a0fa-c175246707b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.934794 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lxsv\" (UniqueName: \"kubernetes.io/projected/d2bd97fa-e6d7-493c-98b4-2aae4889065e-kube-api-access-2lxsv\") pod \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") "
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.934873 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-catalog-content\") pod \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") "
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.934908 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-utilities\") pod \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\" (UID: \"d2bd97fa-e6d7-493c-98b4-2aae4889065e\") "
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.935263 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad620ec8-50ba-4415-a0fa-c175246707b8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.935937 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-utilities" (OuterVolumeSpecName: "utilities") pod "d2bd97fa-e6d7-493c-98b4-2aae4889065e" (UID: "d2bd97fa-e6d7-493c-98b4-2aae4889065e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.938008 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2bd97fa-e6d7-493c-98b4-2aae4889065e-kube-api-access-2lxsv" (OuterVolumeSpecName: "kube-api-access-2lxsv") pod "d2bd97fa-e6d7-493c-98b4-2aae4889065e" (UID: "d2bd97fa-e6d7-493c-98b4-2aae4889065e"). InnerVolumeSpecName "kube-api-access-2lxsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:21:27 crc kubenswrapper[4904]: I0223 10:21:27.993432 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2bd97fa-e6d7-493c-98b4-2aae4889065e" (UID: "d2bd97fa-e6d7-493c-98b4-2aae4889065e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:21:28 crc kubenswrapper[4904]: I0223 10:21:28.036906 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 10:21:28 crc kubenswrapper[4904]: I0223 10:21:28.036941 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bd97fa-e6d7-493c-98b4-2aae4889065e-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 10:21:28 crc kubenswrapper[4904]: I0223 10:21:28.036954 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lxsv\" (UniqueName: \"kubernetes.io/projected/d2bd97fa-e6d7-493c-98b4-2aae4889065e-kube-api-access-2lxsv\") on node \"crc\" DevicePath \"\""
Feb 23 10:21:28 crc kubenswrapper[4904]: I0223 10:21:28.205544 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vz8pr"]
Feb 23 10:21:28 crc kubenswrapper[4904]: I0223 10:21:28.213029 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vz8pr"]
Feb 23 10:21:28 crc kubenswrapper[4904]: I0223 10:21:28.219589 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkmnj"]
Feb 23 10:21:28 crc kubenswrapper[4904]: I0223 10:21:28.224200 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bkmnj"]
Feb 23 10:21:29 crc kubenswrapper[4904]: I0223 10:21:29.271648 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" path="/var/lib/kubelet/pods/ad620ec8-50ba-4415-a0fa-c175246707b8/volumes"
Feb 23 10:21:29 crc kubenswrapper[4904]: I0223 10:21:29.273219 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" path="/var/lib/kubelet/pods/d2bd97fa-e6d7-493c-98b4-2aae4889065e/volumes"
Feb 23 10:21:32 crc kubenswrapper[4904]: I0223 10:21:32.145843 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl"
Feb 23 10:21:32 crc kubenswrapper[4904]: I0223 10:21:32.153691 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86643e54-73df-41f4-a567-6631562e465b-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5ghdl\" (UID: \"86643e54-73df-41f4-a567-6631562e465b\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl"
Feb 23 10:21:32 crc kubenswrapper[4904]: I0223 10:21:32.436148 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl"
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.486251 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:afb68925f208ca401020ca8b7812de075a77dafe3dc30fae5c095dcbe5acbc8a" Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.486505 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:afb68925f208ca401020ca8b7812de075a77dafe3dc30fae5c095dcbe5acbc8a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-695pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-9595d6797-7tpwb_openstack-operators(2be0ca02-8806-415f-addc-9cd1765721dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.487691 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" podUID="2be0ca02-8806-415f-addc-9cd1765721dc" Feb 23 10:21:32 crc kubenswrapper[4904]: I0223 10:21:32.660292 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.660536 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.660656 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert podName:4ae6c04a-de30-4cba-8b66-740d209955b8 nodeName:}" failed. No retries permitted until 2026-02-23 10:21:48.660625496 +0000 UTC m=+942.080999009 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" (UID: "4ae6c04a-de30-4cba-8b66-740d209955b8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.920829 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:afb68925f208ca401020ca8b7812de075a77dafe3dc30fae5c095dcbe5acbc8a\\\"\"" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" podUID="2be0ca02-8806-415f-addc-9cd1765721dc" Feb 23 10:21:32 crc kubenswrapper[4904]: I0223 10:21:32.964885 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:32 crc kubenswrapper[4904]: I0223 10:21:32.964944 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.965156 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.965225 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:48.965203896 +0000 UTC m=+942.385577409 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "metrics-server-cert" not found Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.965374 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 10:21:32 crc kubenswrapper[4904]: E0223 10:21:32.965454 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs podName:46699645-af00-4370-a5dd-c1c94361be2b nodeName:}" failed. No retries permitted until 2026-02-23 10:21:48.965431282 +0000 UTC m=+942.385804795 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs") pod "openstack-operator-controller-manager-dd8cbd9bf-pnztz" (UID: "46699645-af00-4370-a5dd-c1c94361be2b") : secret "webhook-server-cert" not found Feb 23 10:21:34 crc kubenswrapper[4904]: E0223 10:21:34.261629 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89" Feb 23 10:21:34 crc kubenswrapper[4904]: E0223 10:21:34.261857 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nbmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54967dbbdf-h6d6n_openstack-operators(1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:21:34 crc kubenswrapper[4904]: E0223 10:21:34.263153 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" podUID="1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4" Feb 23 10:21:34 crc kubenswrapper[4904]: E0223 10:21:34.933189 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" podUID="1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4" Feb 23 10:21:36 crc kubenswrapper[4904]: E0223 10:21:36.896663 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 23 10:21:36 crc kubenswrapper[4904]: E0223 10:21:36.896871 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pg8zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-n6zjc_openshift-marketplace(596e43b6-0031-4018-bce2-420a012e6458): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 10:21:36 crc kubenswrapper[4904]: E0223 10:21:36.898121 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-n6zjc" podUID="596e43b6-0031-4018-bce2-420a012e6458" Feb 23 10:21:37 crc kubenswrapper[4904]: E0223 10:21:37.469270 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-n6zjc" podUID="596e43b6-0031-4018-bce2-420a012e6458" Feb 23 10:21:37 crc kubenswrapper[4904]: E0223 10:21:37.476671 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1323d6f8e365f562bb4c1d5dcacd8aa6e2679ff9d963a73bcfd9556baf97a1dd" Feb 23 10:21:37 crc kubenswrapper[4904]: E0223 10:21:37.476854 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1323d6f8e365f562bb4c1d5dcacd8aa6e2679ff9d963a73bcfd9556baf97a1dd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j74vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-68c6d499cb-hwwbw_openstack-operators(e76c5a19-592e-4739-b437-28157ab7d3d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:21:37 crc kubenswrapper[4904]: E0223 10:21:37.478448 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" podUID="e76c5a19-592e-4739-b437-28157ab7d3d5" Feb 23 10:21:37 crc kubenswrapper[4904]: E0223 10:21:37.959905 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1323d6f8e365f562bb4c1d5dcacd8aa6e2679ff9d963a73bcfd9556baf97a1dd\\\"\"" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" podUID="e76c5a19-592e-4739-b437-28157ab7d3d5" Feb 23 10:21:39 crc kubenswrapper[4904]: E0223 10:21:39.761272 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d" Feb 23 10:21:39 crc kubenswrapper[4904]: E0223 10:21:39.761490 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vlb9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-57746b5ff9-9gdtf_openstack-operators(13a8ec0f-4892-4d72-947d-e87ab49b3262): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:21:39 crc kubenswrapper[4904]: E0223 10:21:39.763430 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" podUID="13a8ec0f-4892-4d72-947d-e87ab49b3262" Feb 23 10:21:39 crc kubenswrapper[4904]: I0223 10:21:39.768833 4904 scope.go:117] "RemoveContainer" containerID="cdfc9f1aa47f54bbbffd436b37ed4d67fe8284bf7c5a1082b62cfc9c26fb9ab2" Feb 23 10:21:40 crc kubenswrapper[4904]: E0223 10:21:40.002101 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" podUID="13a8ec0f-4892-4d72-947d-e87ab49b3262" Feb 23 10:21:40 crc kubenswrapper[4904]: E0223 10:21:40.443992 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
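The pull failures above all follow the same two-phase pattern: one ErrImagePull from the CRI ("context canceled" while copying the image config), then ImagePullBackOff on every sync until the image back-off window expires. The same waiting reason is surfaced in the pod's status. A sketch of reading it with client-go (pod and namespace names are taken from the log; the helper itself is illustrative, but the field paths are the standard core/v1 API):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printWaitingReasons lists containers stuck in a waiting state such as
// ErrImagePull or ImagePullBackOff, mirroring the pod_workers errors above.
func printWaitingReasons(cs *kubernetes.Clientset, ns, pod string) error {
	p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, st := range p.Status.ContainerStatuses {
		if st.State.Waiting != nil {
			fmt.Printf("%s: %s (%s)\n", st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
		}
	}
	return nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Pod name from the glance-operator entries above.
	if err := printWaitingReasons(cs, "openstack-operators", "glance-operator-controller-manager-68c6d499cb-hwwbw"); err != nil {
		panic(err)
	}
}
```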
image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469" Feb 23 10:21:40 crc kubenswrapper[4904]: E0223 10:21:40.444199 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sf7c6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-b8bjv_openstack-operators(d56c4104-6cd0-4d5f-b63e-1be797de40d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:21:40 crc kubenswrapper[4904]: E0223 10:21:40.445802 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" podUID="d56c4104-6cd0-4d5f-b63e-1be797de40d8" Feb 23 10:21:41 crc kubenswrapper[4904]: E0223 10:21:41.008473 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" 
podUID="d56c4104-6cd0-4d5f-b63e-1be797de40d8" Feb 23 10:21:42 crc kubenswrapper[4904]: I0223 10:21:42.215343 4904 scope.go:117] "RemoveContainer" containerID="e6e19012a9300309f9cd0ff43b347425be6f8c97a04d81f62062499bb789ba27" Feb 23 10:21:42 crc kubenswrapper[4904]: I0223 10:21:42.822491 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq5gd"] Feb 23 10:21:47 crc kubenswrapper[4904]: I0223 10:21:47.398857 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:21:47 crc kubenswrapper[4904]: I0223 10:21:47.399354 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:21:48 crc kubenswrapper[4904]: I0223 10:21:48.664328 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:48 crc kubenswrapper[4904]: I0223 10:21:48.678036 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ae6c04a-de30-4cba-8b66-740d209955b8-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr\" (UID: \"4ae6c04a-de30-4cba-8b66-740d209955b8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:48 crc kubenswrapper[4904]: I0223 10:21:48.967235 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:48 crc kubenswrapper[4904]: I0223 10:21:48.979233 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:48 crc kubenswrapper[4904]: I0223 10:21:48.979531 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:48 crc kubenswrapper[4904]: I0223 10:21:48.986630 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-webhook-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:48 crc kubenswrapper[4904]: I0223 10:21:48.999328 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46699645-af00-4370-a5dd-c1c94361be2b-metrics-certs\") pod \"openstack-operator-controller-manager-dd8cbd9bf-pnztz\" (UID: \"46699645-af00-4370-a5dd-c1c94361be2b\") " pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:49 crc kubenswrapper[4904]: I0223 10:21:49.260582 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:51 crc kubenswrapper[4904]: W0223 10:21:51.030746 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa04cfec_d7c7_42f9_b386_73b9de223e59.slice/crio-085c7a8716a560db45e1fc1345ec4808a0c806875293bca6c5d81273b97ba73f WatchSource:0}: Error finding container 085c7a8716a560db45e1fc1345ec4808a0c806875293bca6c5d81273b97ba73f: Status 404 returned error can't find the container with id 085c7a8716a560db45e1fc1345ec4808a0c806875293bca6c5d81273b97ba73f Feb 23 10:21:51 crc kubenswrapper[4904]: I0223 10:21:51.100329 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq5gd" event={"ID":"fa04cfec-d7c7-42f9-b386-73b9de223e59","Type":"ContainerStarted","Data":"085c7a8716a560db45e1fc1345ec4808a0c806875293bca6c5d81273b97ba73f"} Feb 23 10:21:51 crc kubenswrapper[4904]: E0223 10:21:51.250152 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89" Feb 23 10:21:51 crc kubenswrapper[4904]: E0223 10:21:51.251013 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wfhjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-wz2gg_openstack-operators(c46970b9-27d9-4b4e-a470-20df6b3fd44c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:21:51 crc kubenswrapper[4904]: E0223 10:21:51.252294 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" podUID="c46970b9-27d9-4b4e-a470-20df6b3fd44c" Feb 23 10:21:51 crc kubenswrapper[4904]: I0223 10:21:51.323156 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl"] Feb 23 10:21:51 crc kubenswrapper[4904]: I0223 10:21:51.326588 4904 scope.go:117] "RemoveContainer" containerID="4f8988979d2fe7f612331eccaee9ee78ee317310f35c9e6857b22452d9d3625e" Feb 23 10:21:51 crc kubenswrapper[4904]: W0223 10:21:51.331293 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86643e54_73df_41f4_a567_6631562e465b.slice/crio-1ed7e5f7152600a5f3e8e64bab04ce3fe4ccc00dd16209babff06df93eca8edc WatchSource:0}: Error finding container 1ed7e5f7152600a5f3e8e64bab04ce3fe4ccc00dd16209babff06df93eca8edc: Status 404 returned error can't find the container with id 1ed7e5f7152600a5f3e8e64bab04ce3fe4ccc00dd16209babff06df93eca8edc Feb 23 10:21:51 crc kubenswrapper[4904]: I0223 10:21:51.628545 4904 scope.go:117] "RemoveContainer" containerID="688b96e014994a7914657789bff954d111945d35bcd909f812a07117694660f2" Feb 23 10:21:51 crc kubenswrapper[4904]: I0223 10:21:51.759863 4904 scope.go:117] "RemoveContainer" containerID="3cc162059c0c71b9de2266b8216c72da4a03333d9c757322aec70bcd219345c5" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.013471 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz"] Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.078381 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr"] Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.129351 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" event={"ID":"86643e54-73df-41f4-a567-6631562e465b","Type":"ContainerStarted","Data":"1ed7e5f7152600a5f3e8e64bab04ce3fe4ccc00dd16209babff06df93eca8edc"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.140695 4904 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" event={"ID":"87c2d044-3f5b-4a7c-80a9-f70c00310af9","Type":"ContainerStarted","Data":"d898e420bf2397771593ca4b3223f00a899cd5d258ec84ee4db3275229687b66"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.141408 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.170970 4904 generic.go:334] "Generic (PLEG): container finished" podID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerID="b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d" exitCode=0 Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.171053 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq5gd" event={"ID":"fa04cfec-d7c7-42f9-b386-73b9de223e59","Type":"ContainerDied","Data":"b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.176187 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" podStartSLOduration=15.01014493 podStartE2EDuration="36.176167619s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.603943426 +0000 UTC m=+912.024316939" lastFinishedPulling="2026-02-23 10:21:39.769966115 +0000 UTC m=+933.190339628" observedRunningTime="2026-02-23 10:21:52.174953964 +0000 UTC m=+945.595327477" watchObservedRunningTime="2026-02-23 10:21:52.176167619 +0000 UTC m=+945.596541132" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.184756 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" event={"ID":"8101934b-1cbc-45c6-9f81-d9da4c586b55","Type":"ContainerStarted","Data":"9005dcbfdb610ee7984c71112b5e93e0cd5694f85b52df1951ef6f6c2f2c30af"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.185389 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.198638 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" event={"ID":"655fcd29-393f-400c-99a7-01cd2f54f6e8","Type":"ContainerStarted","Data":"8530c202f2b8234ab203e370d032c89569ed3cb5c01717834d64d73f43ebef4f"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.199186 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.226864 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" event={"ID":"47a11eed-a07a-47f8-9b13-2fd4d7610c65","Type":"ContainerStarted","Data":"61b386b255a4a5bc61385ecbc5a9a65c99fc3d49eb09871daf6dd81ec7de797b"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.227889 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.271279 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" podStartSLOduration=14.948770553 podStartE2EDuration="37.271261896s" podCreationTimestamp="2026-02-23 10:21:15 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.099003873 +0000 UTC m=+911.519377386" lastFinishedPulling="2026-02-23 10:21:40.421495216 +0000 UTC m=+933.841868729" observedRunningTime="2026-02-23 10:21:52.267121819 +0000 UTC m=+945.687495342" watchObservedRunningTime="2026-02-23 10:21:52.271261896 +0000 UTC m=+945.691635409" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.294980 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" podStartSLOduration=14.650679008000001 podStartE2EDuration="37.294963199s" podCreationTimestamp="2026-02-23 10:21:15 +0000 UTC" firstStartedPulling="2026-02-23 10:21:17.124682465 +0000 UTC m=+910.545055978" lastFinishedPulling="2026-02-23 10:21:39.768966646 +0000 UTC m=+933.189340169" observedRunningTime="2026-02-23 10:21:52.291553812 +0000 UTC m=+945.711927325" watchObservedRunningTime="2026-02-23 10:21:52.294963199 +0000 UTC m=+945.715336712" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.371135 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" event={"ID":"82b3f66d-f7fd-4949-bb69-a8203973ce95","Type":"ContainerStarted","Data":"9e46cf1ec83ae51071e9fe82f404f69eb79de017082e9946690dd36a38749f50"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.371197 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.393909 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" event={"ID":"a28a8d14-5770-426d-b7ba-f2c89f1c5f3f","Type":"ContainerStarted","Data":"349a578b5c80678f3629d2913b82db6e992467fcaac40f59433f11369b13aaf6"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.394564 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.401297 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" podStartSLOduration=14.508569812 podStartE2EDuration="36.401280994s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.543979935 +0000 UTC m=+911.964353448" lastFinishedPulling="2026-02-23 10:21:40.436691117 +0000 UTC m=+933.857064630" observedRunningTime="2026-02-23 10:21:52.352802309 +0000 UTC m=+945.773175822" watchObservedRunningTime="2026-02-23 10:21:52.401280994 +0000 UTC m=+945.821654497" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.413231 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" podStartSLOduration=14.329492002 podStartE2EDuration="36.413208982s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.362931419 +0000 UTC m=+911.783304932" lastFinishedPulling="2026-02-23 10:21:40.446648399 +0000 UTC m=+933.867021912" observedRunningTime="2026-02-23 10:21:52.405439262 +0000 UTC m=+945.825812765" 
watchObservedRunningTime="2026-02-23 10:21:52.413208982 +0000 UTC m=+945.833582495" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.423118 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" event={"ID":"81f9873a-af57-4f93-85cc-df46dbcbcde8","Type":"ContainerStarted","Data":"ac81ada992fdf2150f94f480bb4349c3721d638f251e3af41b0118997f13936b"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.423950 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.460963 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" event={"ID":"d5c4df32-1ca8-4497-8ccd-30b31f71f364","Type":"ContainerStarted","Data":"acfffdedfc2dc5f1f5ee5c850acee7adca764ade18c6a8ade1f71d61024b45e6"} Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.461906 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.477432 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" podStartSLOduration=14.895857268 podStartE2EDuration="36.477401063s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.8399429 +0000 UTC m=+912.260316403" lastFinishedPulling="2026-02-23 10:21:40.421486685 +0000 UTC m=+933.841860198" observedRunningTime="2026-02-23 10:21:52.451239271 +0000 UTC m=+945.871612784" watchObservedRunningTime="2026-02-23 10:21:52.477401063 +0000 UTC m=+945.897774576" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.492465 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" podStartSLOduration=14.95375704 podStartE2EDuration="36.49244481s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.908667289 +0000 UTC m=+912.329040802" lastFinishedPulling="2026-02-23 10:21:40.447355049 +0000 UTC m=+933.867728572" observedRunningTime="2026-02-23 10:21:52.483297881 +0000 UTC m=+945.903671384" watchObservedRunningTime="2026-02-23 10:21:52.49244481 +0000 UTC m=+945.912818313" Feb 23 10:21:52 crc kubenswrapper[4904]: I0223 10:21:52.537959 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" podStartSLOduration=22.307931783 podStartE2EDuration="36.5379412s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.874908922 +0000 UTC m=+912.295282435" lastFinishedPulling="2026-02-23 10:21:33.104918339 +0000 UTC m=+926.525291852" observedRunningTime="2026-02-23 10:21:52.535879862 +0000 UTC m=+945.956253375" watchObservedRunningTime="2026-02-23 10:21:52.5379412 +0000 UTC m=+945.958314703" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.504016 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" event={"ID":"1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4","Type":"ContainerStarted","Data":"46bf536a316f642fceaab0340d28e0fca8fd542e22384481fd48d6b23b9d5df2"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 
10:21:53.505459 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.507938 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6zjc" event={"ID":"596e43b6-0031-4018-bce2-420a012e6458","Type":"ContainerStarted","Data":"90db38f399883453e7297fda0ac6db09aae253b5a035a76820e8bd95a16a208a"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.513687 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" event={"ID":"81c9e2b3-cfc0-400f-9534-f9f6ba8f0482","Type":"ContainerStarted","Data":"2c1885888aa88903e0095fb9c9f3c561d134ec76c44479a90c788cbccbd3cd6a"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.513989 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.515473 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" event={"ID":"2be0ca02-8806-415f-addc-9cd1765721dc","Type":"ContainerStarted","Data":"0b3df814a68a6cc64d2e22bd80e5d447f88e117c8d5f9f6a1cfffc3b470c6907"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.516004 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.530104 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" event={"ID":"e76c5a19-592e-4739-b437-28157ab7d3d5","Type":"ContainerStarted","Data":"4a7601e3ad816894e91ca6d3698f26e12837541c745bc7e735ec6fe034977db5"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.530975 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.532379 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" podStartSLOduration=4.740096458 podStartE2EDuration="37.532275805s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.887613582 +0000 UTC m=+912.307987095" lastFinishedPulling="2026-02-23 10:21:51.679792929 +0000 UTC m=+945.100166442" observedRunningTime="2026-02-23 10:21:53.531288917 +0000 UTC m=+946.951662430" watchObservedRunningTime="2026-02-23 10:21:53.532275805 +0000 UTC m=+946.952649318" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.565594 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" podStartSLOduration=14.265879119 podStartE2EDuration="37.56557287s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.916492741 +0000 UTC m=+912.336866254" lastFinishedPulling="2026-02-23 10:21:42.216186462 +0000 UTC m=+935.636560005" observedRunningTime="2026-02-23 10:21:53.559557639 +0000 UTC m=+946.979931152" watchObservedRunningTime="2026-02-23 10:21:53.56557287 +0000 UTC m=+946.985946383" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.570450 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" event={"ID":"a2e7a886-a671-44c1-909d-90224369a5e2","Type":"ContainerStarted","Data":"1cdddac3b3e9d26b061919391b6c14b1d7d9d1a3f2c58bcd1aa96785f48bc325"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.570604 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.601930 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" event={"ID":"58c024f9-9e55-4e30-9dbc-8ba460e4b91d","Type":"ContainerStarted","Data":"a0ea20e602ba3a866b3ae54c8ff4c1a5572a24e6bdaedec8096738e31cfc0c01"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.602425 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.616084 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" event={"ID":"4ae6c04a-de30-4cba-8b66-740d209955b8","Type":"ContainerStarted","Data":"d56fac979e30c6461d26c46d5460ff06be5edebc8bfddc1ebb7da4bc3b0f176b"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.632923 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" event={"ID":"46699645-af00-4370-a5dd-c1c94361be2b","Type":"ContainerStarted","Data":"675189428c08d8471a1ccb60cabd08f989a882a0442a551c8455095dd314d7a1"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.632980 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" event={"ID":"46699645-af00-4370-a5dd-c1c94361be2b","Type":"ContainerStarted","Data":"f243d8f15025373f8a7fe51da0ed9e4a7b207e747c419398a7054b640c50e478"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.633087 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.656558 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" event={"ID":"98685ce7-3242-4e1e-94f9-bf90399619d3","Type":"ContainerStarted","Data":"ea317aa6bfa9397cd1349aabc040f98528584f5289d7630adcbb642749f0c2b3"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.665730 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" event={"ID":"b76d29c0-207f-45c3-a983-6496fd95588e","Type":"ContainerStarted","Data":"7a9fca58c691e5a08aec50af6bddef07ac322ea6ec0bd0310c05ddcf533f0c7d"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.666151 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.724214 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" podStartSLOduration=5.9945430349999995 podStartE2EDuration="38.724198229s" podCreationTimestamp="2026-02-23 10:21:15 +0000 UTC" 
firstStartedPulling="2026-02-23 10:21:18.947877341 +0000 UTC m=+912.368250854" lastFinishedPulling="2026-02-23 10:21:51.677532535 +0000 UTC m=+945.097906048" observedRunningTime="2026-02-23 10:21:53.721114542 +0000 UTC m=+947.141488055" watchObservedRunningTime="2026-02-23 10:21:53.724198229 +0000 UTC m=+947.144571742" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.726466 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" podStartSLOduration=5.596832605 podStartE2EDuration="38.726459924s" podCreationTimestamp="2026-02-23 10:21:15 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.564858757 +0000 UTC m=+911.985232270" lastFinishedPulling="2026-02-23 10:21:51.694486086 +0000 UTC m=+945.114859589" observedRunningTime="2026-02-23 10:21:53.632061866 +0000 UTC m=+947.052435379" watchObservedRunningTime="2026-02-23 10:21:53.726459924 +0000 UTC m=+947.146833437" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.738590 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" event={"ID":"27e5aeb0-8732-493a-9ec6-ebb846416db9","Type":"ContainerStarted","Data":"811b29a403b87a7a29a9cc9bcf0797d25ffa919f6879aeefa7266d9199945dbc"} Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.755005 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" podStartSLOduration=16.226835362 podStartE2EDuration="37.754979622s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.916859142 +0000 UTC m=+912.337232655" lastFinishedPulling="2026-02-23 10:21:40.445003402 +0000 UTC m=+933.865376915" observedRunningTime="2026-02-23 10:21:53.750624239 +0000 UTC m=+947.170997752" watchObservedRunningTime="2026-02-23 10:21:53.754979622 +0000 UTC m=+947.175353125" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.785442 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" podStartSLOduration=15.562083867 podStartE2EDuration="37.785415386s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.857158208 +0000 UTC m=+912.277531721" lastFinishedPulling="2026-02-23 10:21:41.080489727 +0000 UTC m=+934.500863240" observedRunningTime="2026-02-23 10:21:53.78380258 +0000 UTC m=+947.204176093" watchObservedRunningTime="2026-02-23 10:21:53.785415386 +0000 UTC m=+947.205788899" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.804846 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bt9jc" podStartSLOduration=5.401858618 podStartE2EDuration="37.804823726s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.983995146 +0000 UTC m=+912.404368659" lastFinishedPulling="2026-02-23 10:21:51.386960264 +0000 UTC m=+944.807333767" observedRunningTime="2026-02-23 10:21:53.804145997 +0000 UTC m=+947.224519510" watchObservedRunningTime="2026-02-23 10:21:53.804823726 +0000 UTC m=+947.225197239" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.828973 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" podStartSLOduration=5.447919236 
podStartE2EDuration="37.828946891s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.949188549 +0000 UTC m=+912.369562062" lastFinishedPulling="2026-02-23 10:21:51.330216204 +0000 UTC m=+944.750589717" observedRunningTime="2026-02-23 10:21:53.827461978 +0000 UTC m=+947.247835491" watchObservedRunningTime="2026-02-23 10:21:53.828946891 +0000 UTC m=+947.249320404" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.892756 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" podStartSLOduration=37.89272832 podStartE2EDuration="37.89272832s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:21:53.888961063 +0000 UTC m=+947.309334576" watchObservedRunningTime="2026-02-23 10:21:53.89272832 +0000 UTC m=+947.313101823" Feb 23 10:21:53 crc kubenswrapper[4904]: I0223 10:21:53.913879 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" podStartSLOduration=5.512333562 podStartE2EDuration="37.913859829s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.927992207 +0000 UTC m=+912.348365720" lastFinishedPulling="2026-02-23 10:21:51.329518474 +0000 UTC m=+944.749891987" observedRunningTime="2026-02-23 10:21:53.910746711 +0000 UTC m=+947.331120224" watchObservedRunningTime="2026-02-23 10:21:53.913859829 +0000 UTC m=+947.334233332" Feb 23 10:21:54 crc kubenswrapper[4904]: I0223 10:21:54.758627 4904 generic.go:334] "Generic (PLEG): container finished" podID="596e43b6-0031-4018-bce2-420a012e6458" containerID="90db38f399883453e7297fda0ac6db09aae253b5a035a76820e8bd95a16a208a" exitCode=0 Feb 23 10:21:54 crc kubenswrapper[4904]: I0223 10:21:54.758706 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6zjc" event={"ID":"596e43b6-0031-4018-bce2-420a012e6458","Type":"ContainerDied","Data":"90db38f399883453e7297fda0ac6db09aae253b5a035a76820e8bd95a16a208a"} Feb 23 10:21:54 crc kubenswrapper[4904]: I0223 10:21:54.761482 4904 generic.go:334] "Generic (PLEG): container finished" podID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerID="d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891" exitCode=0 Feb 23 10:21:54 crc kubenswrapper[4904]: I0223 10:21:54.761528 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq5gd" event={"ID":"fa04cfec-d7c7-42f9-b386-73b9de223e59","Type":"ContainerDied","Data":"d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891"} Feb 23 10:21:56 crc kubenswrapper[4904]: I0223 10:21:56.238380 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-5btm2" Feb 23 10:21:56 crc kubenswrapper[4904]: I0223 10:21:56.307625 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-6qtcx" Feb 23 10:21:56 crc kubenswrapper[4904]: I0223 10:21:56.579008 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-dk89f" Feb 23 10:21:56 crc kubenswrapper[4904]: I0223 10:21:56.767311 4904 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-f6gqt" Feb 23 10:21:56 crc kubenswrapper[4904]: I0223 10:21:56.858047 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-hf8p4" Feb 23 10:21:56 crc kubenswrapper[4904]: I0223 10:21:56.924024 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-d6v74" Feb 23 10:21:56 crc kubenswrapper[4904]: I0223 10:21:56.965603 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-kxzq8" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.141568 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-2mds6" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.250415 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.252278 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-x55xc" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.341686 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-58wnl" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.434404 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-ccb96f8ff-gxgct" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.791988 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" event={"ID":"13a8ec0f-4892-4d72-947d-e87ab49b3262","Type":"ContainerStarted","Data":"37f853b264f2f75289b1fe7a64e2d89d9159ec8e97c47d4dfd7032931048c047"} Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.793013 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.795170 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" event={"ID":"4ae6c04a-de30-4cba-8b66-740d209955b8","Type":"ContainerStarted","Data":"29da6dcd4e04a4cdab8332c2df44f49ee7e5a3d6fd92146c9b212b0986f75ccc"} Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.796082 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.797540 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6zjc" event={"ID":"596e43b6-0031-4018-bce2-420a012e6458","Type":"ContainerStarted","Data":"085cf2091ea38bb6aebbd56bedd3ff25c91aa1badd597a697bb68b498452e090"} Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.803492 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" 
event={"ID":"d56c4104-6cd0-4d5f-b63e-1be797de40d8","Type":"ContainerStarted","Data":"2b01dbefe4b77395ac65c532b89797cff5cf2c86d77a92df8134629ab3ba5d9a"} Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.804688 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.811733 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq5gd" event={"ID":"fa04cfec-d7c7-42f9-b386-73b9de223e59","Type":"ContainerStarted","Data":"8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3"} Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.822815 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" podStartSLOduration=3.725192015 podStartE2EDuration="42.822782837s" podCreationTimestamp="2026-02-23 10:21:15 +0000 UTC" firstStartedPulling="2026-02-23 10:21:17.556199456 +0000 UTC m=+910.976572969" lastFinishedPulling="2026-02-23 10:21:56.653790258 +0000 UTC m=+950.074163791" observedRunningTime="2026-02-23 10:21:57.817166658 +0000 UTC m=+951.237540351" watchObservedRunningTime="2026-02-23 10:21:57.822782837 +0000 UTC m=+951.243156350" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.823453 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" event={"ID":"86643e54-73df-41f4-a567-6631562e465b","Type":"ContainerStarted","Data":"8db8550b70da3cddd8ccacd12332132c76afc44f05cd0d037304da7211bab188"} Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.823517 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.854526 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n6zjc" podStartSLOduration=2.806529936 podStartE2EDuration="42.854488236s" podCreationTimestamp="2026-02-23 10:21:15 +0000 UTC" firstStartedPulling="2026-02-23 10:21:16.602602416 +0000 UTC m=+910.022975929" lastFinishedPulling="2026-02-23 10:21:56.650560696 +0000 UTC m=+950.070934229" observedRunningTime="2026-02-23 10:21:57.846625253 +0000 UTC m=+951.266998766" watchObservedRunningTime="2026-02-23 10:21:57.854488236 +0000 UTC m=+951.274861749" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.881645 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr" podStartSLOduration=37.427951146 podStartE2EDuration="41.881627256s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:52.194309533 +0000 UTC m=+945.614683046" lastFinishedPulling="2026-02-23 10:21:56.647985633 +0000 UTC m=+950.068359156" observedRunningTime="2026-02-23 10:21:57.875604865 +0000 UTC m=+951.295978378" watchObservedRunningTime="2026-02-23 10:21:57.881627256 +0000 UTC m=+951.302000769" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.905486 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" podStartSLOduration=3.5964412279999998 podStartE2EDuration="41.905460002s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" 
firstStartedPulling="2026-02-23 10:21:18.340190264 +0000 UTC m=+911.760563777" lastFinishedPulling="2026-02-23 10:21:56.649209028 +0000 UTC m=+950.069582551" observedRunningTime="2026-02-23 10:21:57.899286827 +0000 UTC m=+951.319660340" watchObservedRunningTime="2026-02-23 10:21:57.905460002 +0000 UTC m=+951.325833515" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.918609 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sq5gd" podStartSLOduration=32.494873694 podStartE2EDuration="36.918583854s" podCreationTimestamp="2026-02-23 10:21:21 +0000 UTC" firstStartedPulling="2026-02-23 10:21:52.218593862 +0000 UTC m=+945.638967375" lastFinishedPulling="2026-02-23 10:21:56.642304002 +0000 UTC m=+950.062677535" observedRunningTime="2026-02-23 10:21:57.916629539 +0000 UTC m=+951.337003052" watchObservedRunningTime="2026-02-23 10:21:57.918583854 +0000 UTC m=+951.338957367" Feb 23 10:21:57 crc kubenswrapper[4904]: I0223 10:21:57.941921 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" podStartSLOduration=36.700299377 podStartE2EDuration="41.941900846s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:51.341047271 +0000 UTC m=+944.761420784" lastFinishedPulling="2026-02-23 10:21:56.58264872 +0000 UTC m=+950.003022253" observedRunningTime="2026-02-23 10:21:57.933297892 +0000 UTC m=+951.353671405" watchObservedRunningTime="2026-02-23 10:21:57.941900846 +0000 UTC m=+951.362274359" Feb 23 10:21:59 crc kubenswrapper[4904]: I0223 10:21:59.272479 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-dd8cbd9bf-pnztz" Feb 23 10:22:01 crc kubenswrapper[4904]: I0223 10:22:01.432532 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sq5gd" Feb 23 10:22:01 crc kubenswrapper[4904]: I0223 10:22:01.433221 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sq5gd" Feb 23 10:22:01 crc kubenswrapper[4904]: I0223 10:22:01.513940 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sq5gd" Feb 23 10:22:02 crc kubenswrapper[4904]: I0223 10:22:02.447706 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5ghdl" Feb 23 10:22:04 crc kubenswrapper[4904]: E0223 10:22:04.258587 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" podUID="c46970b9-27d9-4b4e-a470-20df6b3fd44c" Feb 23 10:22:05 crc kubenswrapper[4904]: I0223 10:22:05.392277 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:22:05 crc kubenswrapper[4904]: I0223 10:22:05.392950 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:22:05 crc kubenswrapper[4904]: I0223 10:22:05.445214 4904 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:22:05 crc kubenswrapper[4904]: I0223 10:22:05.964175 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n6zjc" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.075529 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6zjc"] Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.143436 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm4x2"] Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.143875 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mm4x2" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerName="registry-server" containerID="cri-o://bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34" gracePeriod=2 Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.272364 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-9gdtf" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.450791 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-hwwbw" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.492484 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7tpwb" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.534028 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-wdw2p" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.610523 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-b8bjv" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.659387 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.671887 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-utilities\") pod \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.672052 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxbz5\" (UniqueName: \"kubernetes.io/projected/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-kube-api-access-vxbz5\") pod \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.672166 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-catalog-content\") pod \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\" (UID: \"bb3d1d55-eafd-4635-847c-d649a2c2d3e8\") " Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.672968 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-utilities" (OuterVolumeSpecName: "utilities") pod "bb3d1d55-eafd-4635-847c-d649a2c2d3e8" (UID: "bb3d1d55-eafd-4635-847c-d649a2c2d3e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.690651 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-kube-api-access-vxbz5" (OuterVolumeSpecName: "kube-api-access-vxbz5") pod "bb3d1d55-eafd-4635-847c-d649a2c2d3e8" (UID: "bb3d1d55-eafd-4635-847c-d649a2c2d3e8"). InnerVolumeSpecName "kube-api-access-vxbz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.742379 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb3d1d55-eafd-4635-847c-d649a2c2d3e8" (UID: "bb3d1d55-eafd-4635-847c-d649a2c2d3e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.775108 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.775162 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxbz5\" (UniqueName: \"kubernetes.io/projected/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-kube-api-access-vxbz5\") on node \"crc\" DevicePath \"\"" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.775200 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3d1d55-eafd-4635-847c-d649a2c2d3e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.884213 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-h6d6n" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.910315 4904 generic.go:334] "Generic (PLEG): container finished" podID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerID="bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34" exitCode=0 Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.910445 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm4x2" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.910508 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm4x2" event={"ID":"bb3d1d55-eafd-4635-847c-d649a2c2d3e8","Type":"ContainerDied","Data":"bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34"} Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.910550 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm4x2" event={"ID":"bb3d1d55-eafd-4635-847c-d649a2c2d3e8","Type":"ContainerDied","Data":"2e9e56156f964ba8f462740e2705b32a545ab31b932090762ca9e9ed279f52d6"} Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.910573 4904 scope.go:117] "RemoveContainer" containerID="bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.962972 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm4x2"] Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.964227 4904 scope.go:117] "RemoveContainer" containerID="868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.972655 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mm4x2"] Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.997954 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s95c7" Feb 23 10:22:06 crc kubenswrapper[4904]: I0223 10:22:06.999421 4904 scope.go:117] "RemoveContainer" containerID="5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d" Feb 23 10:22:07 crc kubenswrapper[4904]: I0223 10:22:07.030360 4904 scope.go:117] "RemoveContainer" containerID="bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34" Feb 23 10:22:07 crc kubenswrapper[4904]: E0223 10:22:07.036282 4904 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34\": container with ID starting with bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34 not found: ID does not exist" containerID="bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34"
Feb 23 10:22:07 crc kubenswrapper[4904]: I0223 10:22:07.036355 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34"} err="failed to get container status \"bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34\": rpc error: code = NotFound desc = could not find container \"bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34\": container with ID starting with bb2e31e0f0ba739b1684422d99760336541956d1fe262b94da9c301bc2ce6e34 not found: ID does not exist"
Feb 23 10:22:07 crc kubenswrapper[4904]: I0223 10:22:07.036393 4904 scope.go:117] "RemoveContainer" containerID="868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb"
Feb 23 10:22:07 crc kubenswrapper[4904]: E0223 10:22:07.040831 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb\": container with ID starting with 868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb not found: ID does not exist" containerID="868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb"
Feb 23 10:22:07 crc kubenswrapper[4904]: I0223 10:22:07.040873 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb"} err="failed to get container status \"868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb\": rpc error: code = NotFound desc = could not find container \"868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb\": container with ID starting with 868bc957d95cf994a1538ace0ed3b45fce61c37f96022c2b26180b7a34528abb not found: ID does not exist"
Feb 23 10:22:07 crc kubenswrapper[4904]: I0223 10:22:07.040894 4904 scope.go:117] "RemoveContainer" containerID="5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d"
Feb 23 10:22:07 crc kubenswrapper[4904]: E0223 10:22:07.042394 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d\": container with ID starting with 5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d not found: ID does not exist" containerID="5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d"
Feb 23 10:22:07 crc kubenswrapper[4904]: I0223 10:22:07.042468 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d"} err="failed to get container status \"5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d\": rpc error: code = NotFound desc = could not find container \"5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d\": container with ID starting with 5baed017a14dd418b3c2f0874a3e54adb9c75a610c1fcfdaf1bb30956677de0d not found: ID does not exist"
Feb 23 10:22:07 crc kubenswrapper[4904]: I0223 10:22:07.287160 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" path="/var/lib/kubelet/pods/bb3d1d55-eafd-4635-847c-d649a2c2d3e8/volumes"
Feb 23 10:22:08 crc kubenswrapper[4904]: I0223 10:22:08.974667 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr"
Feb 23 10:22:11 crc kubenswrapper[4904]: I0223 10:22:11.518484 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:22:11 crc kubenswrapper[4904]: I0223 10:22:11.579262 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq5gd"]
Feb 23 10:22:11 crc kubenswrapper[4904]: I0223 10:22:11.953956 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sq5gd" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerName="registry-server" containerID="cri-o://8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3" gracePeriod=2
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.503307 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.578685 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-catalog-content\") pod \"fa04cfec-d7c7-42f9-b386-73b9de223e59\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") "
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.578877 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-utilities\") pod \"fa04cfec-d7c7-42f9-b386-73b9de223e59\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") "
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.580228 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-utilities" (OuterVolumeSpecName: "utilities") pod "fa04cfec-d7c7-42f9-b386-73b9de223e59" (UID: "fa04cfec-d7c7-42f9-b386-73b9de223e59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.580483 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcbhs\" (UniqueName: \"kubernetes.io/projected/fa04cfec-d7c7-42f9-b386-73b9de223e59-kube-api-access-qcbhs\") pod \"fa04cfec-d7c7-42f9-b386-73b9de223e59\" (UID: \"fa04cfec-d7c7-42f9-b386-73b9de223e59\") "
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.580880 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.590789 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa04cfec-d7c7-42f9-b386-73b9de223e59-kube-api-access-qcbhs" (OuterVolumeSpecName: "kube-api-access-qcbhs") pod "fa04cfec-d7c7-42f9-b386-73b9de223e59" (UID: "fa04cfec-d7c7-42f9-b386-73b9de223e59"). InnerVolumeSpecName "kube-api-access-qcbhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.610807 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa04cfec-d7c7-42f9-b386-73b9de223e59" (UID: "fa04cfec-d7c7-42f9-b386-73b9de223e59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.682435 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04cfec-d7c7-42f9-b386-73b9de223e59-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.682487 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcbhs\" (UniqueName: \"kubernetes.io/projected/fa04cfec-d7c7-42f9-b386-73b9de223e59-kube-api-access-qcbhs\") on node \"crc\" DevicePath \"\""
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.967636 4904 generic.go:334] "Generic (PLEG): container finished" podID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerID="8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3" exitCode=0
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.967739 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq5gd" event={"ID":"fa04cfec-d7c7-42f9-b386-73b9de223e59","Type":"ContainerDied","Data":"8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3"}
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.967781 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sq5gd"
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.967829 4904 scope.go:117] "RemoveContainer" containerID="8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3"
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.967805 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq5gd" event={"ID":"fa04cfec-d7c7-42f9-b386-73b9de223e59","Type":"ContainerDied","Data":"085c7a8716a560db45e1fc1345ec4808a0c806875293bca6c5d81273b97ba73f"}
Feb 23 10:22:12 crc kubenswrapper[4904]: I0223 10:22:12.995921 4904 scope.go:117] "RemoveContainer" containerID="d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891"
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.022670 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq5gd"]
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.025738 4904 scope.go:117] "RemoveContainer" containerID="b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d"
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.032582 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq5gd"]
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.062818 4904 scope.go:117] "RemoveContainer" containerID="8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3"
Feb 23 10:22:13 crc kubenswrapper[4904]: E0223 10:22:13.063649 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3\": container with ID starting with 8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3 not found: ID does not exist" containerID="8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3"
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.063743 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3"} err="failed to get container status \"8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3\": rpc error: code = NotFound desc = could not find container \"8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3\": container with ID starting with 8b88ebbe693f9844abd221493f1c1ecea5e65edc4c5ca59a1bf9b6960c668bb3 not found: ID does not exist"
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.063790 4904 scope.go:117] "RemoveContainer" containerID="d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891"
Feb 23 10:22:13 crc kubenswrapper[4904]: E0223 10:22:13.064555 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891\": container with ID starting with d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891 not found: ID does not exist" containerID="d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891"
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.064698 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891"} err="failed to get container status \"d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891\": rpc error: code = NotFound desc = could not find container \"d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891\": container with ID starting with d78893676c2b20028c6b13e2ddd1c3f5ea0c2a94131dc87b126a23b23adc3891 not found: ID does not exist"
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.064767 4904 scope.go:117] "RemoveContainer" containerID="b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d"
Feb 23 10:22:13 crc kubenswrapper[4904]: E0223 10:22:13.065225 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d\": container with ID starting with b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d not found: ID does not exist" containerID="b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d"
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.065266 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d"} err="failed to get container status \"b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d\": rpc error: code = NotFound desc = could not find container \"b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d\": container with ID starting with b117cb34ea8b7ee6f3f31fc7afd83011c0d82f923dfd3a17b89420f026abe06d not found: ID does not exist"
Feb 23 10:22:13 crc kubenswrapper[4904]: I0223 10:22:13.272000 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" path="/var/lib/kubelet/pods/fa04cfec-d7c7-42f9-b386-73b9de223e59/volumes"
Feb 23 10:22:17 crc kubenswrapper[4904]: I0223 10:22:17.398261 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 10:22:17 crc kubenswrapper[4904]: I0223 10:22:17.399129 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 10:22:18 crc kubenswrapper[4904]: I0223 10:22:18.014170 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" event={"ID":"c46970b9-27d9-4b4e-a470-20df6b3fd44c","Type":"ContainerStarted","Data":"e4beb3f198410e6fc7381fbb06faa73ec65a2cea920c1ada3f39801e29718735"}
Feb 23 10:22:18 crc kubenswrapper[4904]: I0223 10:22:18.014420 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg"
Feb 23 10:22:18 crc kubenswrapper[4904]: I0223 10:22:18.036592 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg" podStartSLOduration=4.145983636 podStartE2EDuration="1m2.036560625s" podCreationTimestamp="2026-02-23 10:21:16 +0000 UTC" firstStartedPulling="2026-02-23 10:21:18.9841325 +0000 UTC m=+912.404506013" lastFinishedPulling="2026-02-23 10:22:16.874709449 +0000 UTC m=+970.295083002" observedRunningTime="2026-02-23 10:22:18.034594729 +0000 UTC m=+971.454968252" watchObservedRunningTime="2026-02-23 10:22:18.036560625 +0000 UTC m=+971.456934148"
Feb 23 10:22:27 crc kubenswrapper[4904]: I0223 10:22:27.093113 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-wz2gg"
Feb 23 10:22:47 crc kubenswrapper[4904]: I0223 10:22:47.397989 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 10:22:47 crc kubenswrapper[4904]: I0223 10:22:47.398847 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 10:22:47 crc kubenswrapper[4904]: I0223 10:22:47.398927 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k"
Feb 23 10:22:47 crc kubenswrapper[4904]: I0223 10:22:47.400004 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65119bff18e2a117e57ca57b960a2723a6ad4c2ce44063bd803ebeebee5b384d"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 10:22:47 crc kubenswrapper[4904]: I0223 10:22:47.400106 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://65119bff18e2a117e57ca57b960a2723a6ad4c2ce44063bd803ebeebee5b384d" gracePeriod=600
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.290767 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="65119bff18e2a117e57ca57b960a2723a6ad4c2ce44063bd803ebeebee5b384d" exitCode=0
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.290840 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"65119bff18e2a117e57ca57b960a2723a6ad4c2ce44063bd803ebeebee5b384d"}
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.291445 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"a6264b62be7a8acc04b5529c5a569156f7d3e2773a196aa33b2133b46c2a62f4"}
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.291489 4904 scope.go:117] "RemoveContainer" containerID="5bd1bf756dbf679d7fd1d8585fe9574e3cbefdb7a18e5dc940344b15e289c6d4"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.559378 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f8vcl"]
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.559913 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="extract-utilities"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.559924 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="extract-utilities"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.559936 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerName="extract-content"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.559942 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerName="extract-content"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.559949 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.559956 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.559967 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.559972 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.559981 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerName="extract-utilities"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.559986 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerName="extract-utilities"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.559994 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerName="extract-utilities"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560000 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerName="extract-utilities"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.560009 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560014 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.560027 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="extract-content"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560034 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="extract-content"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.560047 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerName="extract-content"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560052 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerName="extract-content"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.560064 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560070 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.560077 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerName="extract-content"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560083 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerName="extract-content"
Feb 23 10:22:48 crc kubenswrapper[4904]: E0223 10:22:48.560094 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerName="extract-utilities"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560099 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerName="extract-utilities"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560219 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bd97fa-e6d7-493c-98b4-2aae4889065e" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560230 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3d1d55-eafd-4635-847c-d649a2c2d3e8" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560241 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa04cfec-d7c7-42f9-b386-73b9de223e59" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.560254 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad620ec8-50ba-4415-a0fa-c175246707b8" containerName="registry-server"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.561009 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.565376 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.566417 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kjbft"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.566589 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.566950 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.573390 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f8vcl"]
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.650794 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4sjr7"]
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.652493 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.656934 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.661680 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4sjr7"]
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.742029 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsnh\" (UniqueName: \"kubernetes.io/projected/01ececcd-713f-4c35-bcdc-f186f8c1b081-kube-api-access-zpsnh\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.742097 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.742222 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379438a2-08e2-4bda-a112-372068f4c001-config\") pod \"dnsmasq-dns-675f4bcbfc-f8vcl\" (UID: \"379438a2-08e2-4bda-a112-372068f4c001\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.742253 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br9pq\" (UniqueName: \"kubernetes.io/projected/379438a2-08e2-4bda-a112-372068f4c001-kube-api-access-br9pq\") pod \"dnsmasq-dns-675f4bcbfc-f8vcl\" (UID: \"379438a2-08e2-4bda-a112-372068f4c001\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.742275 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-config\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.843317 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpsnh\" (UniqueName: \"kubernetes.io/projected/01ececcd-713f-4c35-bcdc-f186f8c1b081-kube-api-access-zpsnh\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.843382 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.843453 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379438a2-08e2-4bda-a112-372068f4c001-config\") pod \"dnsmasq-dns-675f4bcbfc-f8vcl\" (UID: \"379438a2-08e2-4bda-a112-372068f4c001\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.843481 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br9pq\" (UniqueName: \"kubernetes.io/projected/379438a2-08e2-4bda-a112-372068f4c001-kube-api-access-br9pq\") pod \"dnsmasq-dns-675f4bcbfc-f8vcl\" (UID: \"379438a2-08e2-4bda-a112-372068f4c001\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.843500 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-config\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.844907 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-config\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.844938 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.845137 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379438a2-08e2-4bda-a112-372068f4c001-config\") pod \"dnsmasq-dns-675f4bcbfc-f8vcl\" (UID: \"379438a2-08e2-4bda-a112-372068f4c001\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.867848 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br9pq\" (UniqueName: \"kubernetes.io/projected/379438a2-08e2-4bda-a112-372068f4c001-kube-api-access-br9pq\") pod \"dnsmasq-dns-675f4bcbfc-f8vcl\" (UID: \"379438a2-08e2-4bda-a112-372068f4c001\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.867845 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsnh\" (UniqueName: \"kubernetes.io/projected/01ececcd-713f-4c35-bcdc-f186f8c1b081-kube-api-access-zpsnh\") pod \"dnsmasq-dns-78dd6ddcc-4sjr7\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.887491 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl"
Feb 23 10:22:48 crc kubenswrapper[4904]: I0223 10:22:48.982152 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7"
Feb 23 10:22:49 crc kubenswrapper[4904]: I0223 10:22:49.385668 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f8vcl"]
Feb 23 10:22:49 crc kubenswrapper[4904]: W0223 10:22:49.493932 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01ececcd_713f_4c35_bcdc_f186f8c1b081.slice/crio-1dd275f2c6931185b715b7a193b55ee204267d950f7d6cf53acd040a6bac390b WatchSource:0}: Error finding container 1dd275f2c6931185b715b7a193b55ee204267d950f7d6cf53acd040a6bac390b: Status 404 returned error can't find the container with id 1dd275f2c6931185b715b7a193b55ee204267d950f7d6cf53acd040a6bac390b
Feb 23 10:22:49 crc kubenswrapper[4904]: I0223 10:22:49.496066 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4sjr7"]
Feb 23 10:22:50 crc kubenswrapper[4904]: I0223 10:22:50.313249 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7" event={"ID":"01ececcd-713f-4c35-bcdc-f186f8c1b081","Type":"ContainerStarted","Data":"1dd275f2c6931185b715b7a193b55ee204267d950f7d6cf53acd040a6bac390b"}
Feb 23 10:22:50 crc kubenswrapper[4904]: I0223 10:22:50.316372 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl" event={"ID":"379438a2-08e2-4bda-a112-372068f4c001","Type":"ContainerStarted","Data":"e5459c356688a0a2538ac49e7b7aa09a6a61f509f4564b5636b29601b20b9228"}
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.443655 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f8vcl"]
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.469137 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-r72t2"]
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.472434 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.482963 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-r72t2"]
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.607825 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-config\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.607919 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-dns-svc\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.607965 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc94b\" (UniqueName: \"kubernetes.io/projected/95e0989c-1669-4e5b-99ed-bab3cadf50f6-kube-api-access-nc94b\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.709140 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc94b\" (UniqueName: \"kubernetes.io/projected/95e0989c-1669-4e5b-99ed-bab3cadf50f6-kube-api-access-nc94b\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.709235 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-config\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.709287 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-dns-svc\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.710113 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-dns-svc\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.710919 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-config\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.735618 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc94b\" (UniqueName: \"kubernetes.io/projected/95e0989c-1669-4e5b-99ed-bab3cadf50f6-kube-api-access-nc94b\") pod \"dnsmasq-dns-666b6646f7-r72t2\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.807086 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-r72t2"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.831452 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4sjr7"]
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.862797 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kj95b"]
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.864661 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:51 crc kubenswrapper[4904]: I0223 10:22:51.879898 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kj95b"]
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.023754 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.023823 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xvzd\" (UniqueName: \"kubernetes.io/projected/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-kube-api-access-2xvzd\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.023885 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-config\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.126139 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-config\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.126276 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.126322 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xvzd\" (UniqueName: \"kubernetes.io/projected/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-kube-api-access-2xvzd\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.128096 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.128467 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-config\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.176399 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xvzd\" (UniqueName: \"kubernetes.io/projected/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-kube-api-access-2xvzd\") pod \"dnsmasq-dns-57d769cc4f-kj95b\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.244916 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.530836 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-r72t2"]
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.629481 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.631652 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.634877 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.635043 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.637256 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gvtt5"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.638244 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.638428 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.638883 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.639323 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.640174 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.744770 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.748060 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.748260 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.748361 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.748496 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.748526 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.748573 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7bl\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-kube-api-access-bb7bl\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.748617 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.749038 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.749059 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.749154 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.801216 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kj95b"]
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.856100 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.856291 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.856325 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.856372 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb7bl\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-kube-api-access-bb7bl\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.856412 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.856463 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.880677 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.882564 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.880472 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.880636 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.882477 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.859293 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.883509 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.888863 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.888924 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.866644 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.887090 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.886983 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.891671 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.896133 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.898501 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.906486 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb7bl\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-kube-api-access-bb7bl\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.935311 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " pod="openstack/rabbitmq-server-0"
Feb 23 10:22:52 crc kubenswrapper[4904]: I0223 10:22:52.972960 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.024354 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.026873 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.031037 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.031122 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.031393 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.031527 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9vsnr"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.031637 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.031856 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.031993 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.094548 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.094702 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.094770 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.094848 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.094990 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.095028 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.095073 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckll7\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-kube-api-access-ckll7\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.095104 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e626c7f2-db46-4757-bd05-eedfba7b5fc8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.095159 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.095207 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e626c7f2-db46-4757-bd05-eedfba7b5fc8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.095229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.097417 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.196960 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197033 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197063 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197081 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197120 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197144 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckll7\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-kube-api-access-ckll7\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197215 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e626c7f2-db46-4757-bd05-eedfba7b5fc8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197291 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197357 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.197376 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e626c7f2-db46-4757-bd05-eedfba7b5fc8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.199296 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.199931 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.201892 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.202749 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.207644 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.208509 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.210895 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.210894 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e626c7f2-db46-4757-bd05-eedfba7b5fc8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.211007 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e626c7f2-db46-4757-bd05-eedfba7b5fc8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.211625 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.215500 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckll7\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-kube-api-access-ckll7\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.235404 4904
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.357744 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.357989 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-r72t2" event={"ID":"95e0989c-1669-4e5b-99ed-bab3cadf50f6","Type":"ContainerStarted","Data":"c997ea828668f42433a52ee66666ed901cf9b89ae7a38b0fd66596165f1707cd"} Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.359801 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" event={"ID":"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58","Type":"ContainerStarted","Data":"a50f155337283ab21a1d005002e84c4d682745b2837c6a6568b209c9d5e76c6d"} Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.719250 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 10:22:53 crc kubenswrapper[4904]: I0223 10:22:53.880320 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 10:22:53 crc kubenswrapper[4904]: W0223 10:22:53.927066 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode626c7f2_db46_4757_bd05_eedfba7b5fc8.slice/crio-9aee9a816991b9d4ee5f677bce2c8bfb8e90d4a4efb1b72bd7fe5b8d02f709be WatchSource:0}: Error finding container 9aee9a816991b9d4ee5f677bce2c8bfb8e90d4a4efb1b72bd7fe5b8d02f709be: Status 404 returned error can't find the container with id 9aee9a816991b9d4ee5f677bce2c8bfb8e90d4a4efb1b72bd7fe5b8d02f709be Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.010553 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.011862 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.019640 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.019887 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.021155 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.022083 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-f2cq8" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.023019 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.030220 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.123985 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e03468-b21e-4a61-afd3-08f3c10c102d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.124052 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-kolla-config\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.124092 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkb6b\" (UniqueName: \"kubernetes.io/projected/c2e03468-b21e-4a61-afd3-08f3c10c102d-kube-api-access-mkb6b\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.124126 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e03468-b21e-4a61-afd3-08f3c10c102d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.124145 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.124182 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c2e03468-b21e-4a61-afd3-08f3c10c102d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.124213 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.124236 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-config-data-default\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.226264 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e03468-b21e-4a61-afd3-08f3c10c102d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.228404 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.228455 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c2e03468-b21e-4a61-afd3-08f3c10c102d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.228490 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.228531 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-config-data-default\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.228649 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e03468-b21e-4a61-afd3-08f3c10c102d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.228777 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-kolla-config\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.228873 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkb6b\" (UniqueName: \"kubernetes.io/projected/c2e03468-b21e-4a61-afd3-08f3c10c102d-kube-api-access-mkb6b\") pod \"openstack-galera-0\" 
(UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.230442 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c2e03468-b21e-4a61-afd3-08f3c10c102d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.230492 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-config-data-default\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.230602 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-kolla-config\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.230982 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.231164 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2e03468-b21e-4a61-afd3-08f3c10c102d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.237077 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e03468-b21e-4a61-afd3-08f3c10c102d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.264184 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e03468-b21e-4a61-afd3-08f3c10c102d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.285160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkb6b\" (UniqueName: \"kubernetes.io/projected/c2e03468-b21e-4a61-afd3-08f3c10c102d-kube-api-access-mkb6b\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.291454 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"c2e03468-b21e-4a61-afd3-08f3c10c102d\") " pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.347254 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.398171 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"670153e4-0ac6-4ae8-ab14-08a3f2537c6c","Type":"ContainerStarted","Data":"ea83e583a3e807f9dab4b412e935d9446e21df2abf1b2d42e853ee8d8d71fef1"} Feb 23 10:22:54 crc kubenswrapper[4904]: I0223 10:22:54.403197 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e626c7f2-db46-4757-bd05-eedfba7b5fc8","Type":"ContainerStarted","Data":"9aee9a816991b9d4ee5f677bce2c8bfb8e90d4a4efb1b72bd7fe5b8d02f709be"} Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.028979 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.425668 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.428778 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.431635 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ddl5q" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.432265 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.434915 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.437220 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.446934 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.530215 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.538797 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.543226 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lmqmw" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.543458 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.543589 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.551917 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.578142 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wx24\" (UniqueName: \"kubernetes.io/projected/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-kube-api-access-5wx24\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.578231 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.578265 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.578286 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.578318 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.578341 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.578363 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc 
kubenswrapper[4904]: I0223 10:22:55.578400 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.680367 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03ef630b-9a38-4867-9a0a-d16b2c1804a8-kolla-config\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.680514 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wx24\" (UniqueName: \"kubernetes.io/projected/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-kube-api-access-5wx24\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.680589 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.680653 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.680683 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.680790 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bnd4\" (UniqueName: \"kubernetes.io/projected/03ef630b-9a38-4867-9a0a-d16b2c1804a8-kube-api-access-7bnd4\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.682173 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683330 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683546 4904 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683602 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03ef630b-9a38-4867-9a0a-d16b2c1804a8-config-data\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683636 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ef630b-9a38-4867-9a0a-d16b2c1804a8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683675 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683755 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683775 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ef630b-9a38-4867-9a0a-d16b2c1804a8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683814 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.683917 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.684240 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.684632 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.693288 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.700730 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wx24\" (UniqueName: \"kubernetes.io/projected/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-kube-api-access-5wx24\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.700864 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404e5fa5-dbcb-4e7e-ad52-96f65cb16015-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.721164 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"404e5fa5-dbcb-4e7e-ad52-96f65cb16015\") " pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.778491 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.788764 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bnd4\" (UniqueName: \"kubernetes.io/projected/03ef630b-9a38-4867-9a0a-d16b2c1804a8-kube-api-access-7bnd4\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.789026 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03ef630b-9a38-4867-9a0a-d16b2c1804a8-config-data\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.789054 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ef630b-9a38-4867-9a0a-d16b2c1804a8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.789098 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ef630b-9a38-4867-9a0a-d16b2c1804a8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.789206 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03ef630b-9a38-4867-9a0a-d16b2c1804a8-kolla-config\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.790504 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03ef630b-9a38-4867-9a0a-d16b2c1804a8-config-data\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.790511 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03ef630b-9a38-4867-9a0a-d16b2c1804a8-kolla-config\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.794650 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ef630b-9a38-4867-9a0a-d16b2c1804a8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.795328 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ef630b-9a38-4867-9a0a-d16b2c1804a8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.816934 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bnd4\" (UniqueName: \"kubernetes.io/projected/03ef630b-9a38-4867-9a0a-d16b2c1804a8-kube-api-access-7bnd4\") pod \"memcached-0\" (UID: 
\"03ef630b-9a38-4867-9a0a-d16b2c1804a8\") " pod="openstack/memcached-0" Feb 23 10:22:55 crc kubenswrapper[4904]: I0223 10:22:55.871738 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 23 10:22:58 crc kubenswrapper[4904]: I0223 10:22:58.162402 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 10:22:58 crc kubenswrapper[4904]: I0223 10:22:58.164251 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 10:22:58 crc kubenswrapper[4904]: I0223 10:22:58.168421 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jdv58" Feb 23 10:22:58 crc kubenswrapper[4904]: I0223 10:22:58.168646 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnpjl\" (UniqueName: \"kubernetes.io/projected/a76369f2-3ab0-43c6-b601-9c2c0d5636c9-kube-api-access-mnpjl\") pod \"kube-state-metrics-0\" (UID: \"a76369f2-3ab0-43c6-b601-9c2c0d5636c9\") " pod="openstack/kube-state-metrics-0" Feb 23 10:22:58 crc kubenswrapper[4904]: I0223 10:22:58.204001 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 10:22:58 crc kubenswrapper[4904]: I0223 10:22:58.273493 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnpjl\" (UniqueName: \"kubernetes.io/projected/a76369f2-3ab0-43c6-b601-9c2c0d5636c9-kube-api-access-mnpjl\") pod \"kube-state-metrics-0\" (UID: \"a76369f2-3ab0-43c6-b601-9c2c0d5636c9\") " pod="openstack/kube-state-metrics-0" Feb 23 10:22:58 crc kubenswrapper[4904]: I0223 10:22:58.309493 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnpjl\" (UniqueName: \"kubernetes.io/projected/a76369f2-3ab0-43c6-b601-9c2c0d5636c9-kube-api-access-mnpjl\") pod \"kube-state-metrics-0\" (UID: \"a76369f2-3ab0-43c6-b601-9c2c0d5636c9\") " pod="openstack/kube-state-metrics-0" Feb 23 10:22:58 crc kubenswrapper[4904]: I0223 10:22:58.547556 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.549783 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.552490 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.552610 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.556368 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.556553 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.556663 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.556659 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.556808 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.556916 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-c4rwz" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.557412 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.567901 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710331 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710435 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710521 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710595 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710632 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-web-config\") pod \"prometheus-metric-storage-0\" 
(UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710670 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710750 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710786 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710854 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.710906 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjk8q\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-kube-api-access-tjk8q\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.813454 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjk8q\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-kube-api-access-tjk8q\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.814180 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.814243 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.816261 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.816319 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.819953 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.820026 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.820097 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.820236 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.820549 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.820629 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.821891 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " 
pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.822819 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.828525 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.832855 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.834497 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.836099 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.839201 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.839350 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/40819dc70d8445a50995e9a88cc270de788496e012ad6d3d513a831f13ec32aa/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.839931 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.855614 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjk8q\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-kube-api-access-tjk8q\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:22:59 crc kubenswrapper[4904]: I0223 10:22:59.906913 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.203462 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.422170 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q7rxf"] Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.423359 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.426872 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.427059 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.429614 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7nwd5" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.432635 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q7rxf"] Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.437728 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-run\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.437765 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2vj\" (UniqueName: \"kubernetes.io/projected/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-kube-api-access-qb2vj\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.437791 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-combined-ca-bundle\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.437856 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-log-ovn\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.437877 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-run-ovn\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.437903 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-ovn-controller-tls-certs\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.437922 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-scripts\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.531312 4904 
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.533599 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540402 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7aa76c96-25cf-4196-a18f-9a33f9d9e195-scripts\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540485 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-log\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540554 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-etc-ovs\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540618 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-log-ovn\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540640 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-lib\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540665 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-run-ovn\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540697 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-ovn-controller-tls-certs\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540742 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-scripts\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540759 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5r4c\" (UniqueName: \"kubernetes.io/projected/7aa76c96-25cf-4196-a18f-9a33f9d9e195-kube-api-access-c5r4c\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540824 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-run\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540839 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb2vj\" (UniqueName: \"kubernetes.io/projected/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-kube-api-access-qb2vj\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540864 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-combined-ca-bundle\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.540988 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-run\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.541660 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-log-ovn\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.541872 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-run-ovn\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.542037 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-var-run\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.563802 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-ovn-controller-tls-certs\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.569947 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-scripts\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.578171 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-82gvj"]
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.604775 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb2vj\" (UniqueName: \"kubernetes.io/projected/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-kube-api-access-qb2vj\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.616255 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717c8a73-d7f4-48d3-920d-f573f4f9dc9b-combined-ca-bundle\") pod \"ovn-controller-q7rxf\" (UID: \"717c8a73-d7f4-48d3-920d-f573f4f9dc9b\") " pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.643559 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-run\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.643800 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-run\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.643911 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7aa76c96-25cf-4196-a18f-9a33f9d9e195-scripts\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.644003 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-log\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.644065 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-etc-ovs\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.644112 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-lib\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.644168 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5r4c\" (UniqueName: \"kubernetes.io/projected/7aa76c96-25cf-4196-a18f-9a33f9d9e195-kube-api-access-c5r4c\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.644883 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-lib\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.645330 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-etc-ovs\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.645485 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7aa76c96-25cf-4196-a18f-9a33f9d9e195-var-log\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.647307 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7aa76c96-25cf-4196-a18f-9a33f9d9e195-scripts\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.676650 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5r4c\" (UniqueName: \"kubernetes.io/projected/7aa76c96-25cf-4196-a18f-9a33f9d9e195-kube-api-access-c5r4c\") pod \"ovn-controller-ovs-82gvj\" (UID: \"7aa76c96-25cf-4196-a18f-9a33f9d9e195\") " pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.780109 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q7rxf"
Feb 23 10:23:00 crc kubenswrapper[4904]: I0223 10:23:00.894688 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-82gvj"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.894317 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.896408 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
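A pod's startup can be followed through three milestones that recur in this log: the "SyncLoop ADD" from the API source, the "No sandbox for pod can be found. Need to start a new one" decision, and the first PLEG "ContainerStarted" event. A sketch, again assuming the journal text on stdin, that builds such a per-pod timeline (the event labels are this script's, not the kubelet's):

    # pod_timeline.py - collect per-pod startup milestones from the journal.
    import re
    import sys

    EVENTS = [
        ("ADD",     re.compile(r'"SyncLoop ADD".*pods=\["([^"\]]+)"\]')),
        ("SANDBOX", re.compile(r'No sandbox for pod can be found.*pod="([^"]+)"')),
        ("STARTED", re.compile(r'SyncLoop \(PLEG\): event for pod" pod="([^"]+)".*ContainerStarted')),
    ]
    TIME = re.compile(r'[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})')

    timeline = {}
    for line in sys.stdin:
        t = TIME.search(line)
        if not t:
            continue
        for label, pat in EVENTS:
            m = pat.search(line)
            if m:
                # keep only the first occurrence of each milestone
                timeline.setdefault(m.group(1), {}).setdefault(label, t.group(1))

    for pod, marks in sorted(timeline.items()):
        steps = "  ".join(f"{k}={marks[k]}" for k in ("ADD", "SANDBOX", "STARTED") if k in marks)
        print(f"{pod}: {steps}")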
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.901626 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.901933 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.902079 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tqf6b"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.902507 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.902760 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.909343 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.990784 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.990841 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16f60f3-f488-4d2c-858e-dee1662f8f4b-config\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.990894 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c16f60f3-f488-4d2c-858e-dee1662f8f4b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.990923 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c16f60f3-f488-4d2c-858e-dee1662f8f4b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.990954 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6w4\" (UniqueName: \"kubernetes.io/projected/c16f60f3-f488-4d2c-858e-dee1662f8f4b-kube-api-access-cd6w4\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.991133 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.991277 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:01 crc kubenswrapper[4904]: I0223 10:23:01.991313 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.093093 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.093163 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16f60f3-f488-4d2c-858e-dee1662f8f4b-config\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.093210 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c16f60f3-f488-4d2c-858e-dee1662f8f4b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.093234 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c16f60f3-f488-4d2c-858e-dee1662f8f4b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.093259 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6w4\" (UniqueName: \"kubernetes.io/projected/c16f60f3-f488-4d2c-858e-dee1662f8f4b-kube-api-access-cd6w4\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.093817 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.093980 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c16f60f3-f488-4d2c-858e-dee1662f8f4b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.094580 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16f60f3-f488-4d2c-858e-dee1662f8f4b-config\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.094775 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.095493 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c16f60f3-f488-4d2c-858e-dee1662f8f4b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.097323 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.099014 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.101522 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.103595 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.107727 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16f60f3-f488-4d2c-858e-dee1662f8f4b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.110078 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6w4\" (UniqueName: \"kubernetes.io/projected/c16f60f3-f488-4d2c-858e-dee1662f8f4b-kube-api-access-cd6w4\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.125882 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c16f60f3-f488-4d2c-858e-dee1662f8f4b\") " pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:02 crc kubenswrapper[4904]: I0223 10:23:02.216695 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 23 10:23:03 crc kubenswrapper[4904]: I0223 10:23:03.526446 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c2e03468-b21e-4a61-afd3-08f3c10c102d","Type":"ContainerStarted","Data":"3077fce32119e7c15be37a3133f54e3086e66fcad31dab7540c89f9941cd3808"}
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.284200 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.286019 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.289920 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.290180 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-kjttz"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.291069 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.292658 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.302210 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.374606 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.374827 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4601812-5c00-4a35-adc9-2003ca6001b2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.374918 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4601812-5c00-4a35-adc9-2003ca6001b2-config\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.375051 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.376010 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sbtd\" (UniqueName: \"kubernetes.io/projected/e4601812-5c00-4a35-adc9-2003ca6001b2-kube-api-access-4sbtd\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.376087 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.376149 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e4601812-5c00-4a35-adc9-2003ca6001b2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.376340 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.478203 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.478887 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.478692 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.479202 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4601812-5c00-4a35-adc9-2003ca6001b2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.479286 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4601812-5c00-4a35-adc9-2003ca6001b2-config\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.479426 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.480085 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4601812-5c00-4a35-adc9-2003ca6001b2-config\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.480238 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sbtd\" (UniqueName: \"kubernetes.io/projected/e4601812-5c00-4a35-adc9-2003ca6001b2-kube-api-access-4sbtd\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.480316 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.480379 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e4601812-5c00-4a35-adc9-2003ca6001b2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.480407 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4601812-5c00-4a35-adc9-2003ca6001b2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.480751 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e4601812-5c00-4a35-adc9-2003ca6001b2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.500427 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.502583 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sbtd\" (UniqueName: \"kubernetes.io/projected/e4601812-5c00-4a35-adc9-2003ca6001b2-kube-api-access-4sbtd\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.504822 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.508213 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4601812-5c00-4a35-adc9-2003ca6001b2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.510861 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e4601812-5c00-4a35-adc9-2003ca6001b2\") " pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:05 crc kubenswrapper[4904]: I0223 10:23:05.612444 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 23 10:23:06 crc kubenswrapper[4904]: I0223 10:23:06.903982 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-82gvj"]
Feb 23 10:23:13 crc kubenswrapper[4904]: E0223 10:23:13.062017 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Feb 23 10:23:13 crc kubenswrapper[4904]: E0223 10:23:13.063039 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb7bl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(670153e4-0ac6-4ae8-ab14-08a3f2537c6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 10:23:13 crc kubenswrapper[4904]: E0223 10:23:13.064192 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c"
Feb 23 10:23:13 crc kubenswrapper[4904]: E0223 10:23:13.070648 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Feb 23 10:23:13 crc kubenswrapper[4904]: E0223 10:23:13.070879 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckll7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(e626c7f2-db46-4757-bd05-eedfba7b5fc8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 10:23:13 crc kubenswrapper[4904]: E0223 10:23:13.072054 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8"
Feb 23 10:23:13 crc kubenswrapper[4904]: E0223 10:23:13.639630 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c"
Feb 23 10:23:13 crc kubenswrapper[4904]: E0223 10:23:13.640348 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8"
Feb 23 10:23:17 crc kubenswrapper[4904]: I0223 10:23:17.674205 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-82gvj" event={"ID":"7aa76c96-25cf-4196-a18f-9a33f9d9e195","Type":"ContainerStarted","Data":"f93341084ce8409e0bd00a39a0185b01d984afb7eb16bd3699d0e74ff9776a2e"}
Feb 23 10:23:18 crc kubenswrapper[4904]: E0223 10:23:18.494910 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
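The rabbitmq pods above fail their image pulls with a CRI-O "context canceled" error (ErrImagePull) and are then throttled as ImagePullBackOff while the kubelet retries on an exponential back-off. The log does not state the retry parameters; assuming the kubelet's commonly cited defaults of an initial 10 s delay doubling to a 300 s cap (an assumption, not something these entries confirm), the retry schedule works out as in this sketch:

    # backoff_schedule.py - illustrate an exponential image-pull back-off.
    # INITIAL and CAP are assumed defaults (10 s doubling to 300 s), not
    # values read from this log.
    INITIAL, CAP = 10.0, 300.0

    def schedule(retries: int):
        """Yield the delay in seconds before each successive pull retry."""
        delay = INITIAL
        for _ in range(retries):
            yield delay
            delay = min(delay * 2, CAP)

    if __name__ == "__main__":
        total = 0.0
        for i, d in enumerate(schedule(8), start=1):
            total += d
            print(f"retry {i}: wait {d:5.0f}s (cumulative {total:5.0f}s)")

Under those assumptions a pod that keeps failing settles into one pull attempt every five minutes, which matches the long quiet stretches between repeated ImagePullBackOff entries in logs like this one.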
Feb 23 10:23:18 crc kubenswrapper[4904]: E0223 10:23:18.495111 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xvzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-kj95b_openstack(0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 10:23:18 crc kubenswrapper[4904]: E0223 10:23:18.496360 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58"
Feb 23 10:23:18 crc kubenswrapper[4904]: E0223 10:23:18.687699 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.620359 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.621289 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mkb6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(c2e03468-b21e-4a61-afd3-08f3c10c102d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.622527 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="c2e03468-b21e-4a61-afd3-08f3c10c102d"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.706727 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.706960 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc94b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-r72t2_openstack(95e0989c-1669-4e5b-99ed-bab3cadf50f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.709360 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="c2e03468-b21e-4a61-afd3-08f3c10c102d"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.710177 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-r72t2" podUID="95e0989c-1669-4e5b-99ed-bab3cadf50f6"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.718103 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.718397 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpsnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-4sjr7_openstack(01ececcd-713f-4c35-bcdc-f186f8c1b081): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.718481 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.718563 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-br9pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-f8vcl_openstack(379438a2-08e2-4bda-a112-372068f4c001): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.722644 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl" podUID="379438a2-08e2-4bda-a112-372068f4c001"
Feb 23 10:23:20 crc kubenswrapper[4904]: E0223 10:23:20.722755 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7" podUID="01ececcd-713f-4c35-bcdc-f186f8c1b081"
Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.237496 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q7rxf"]
Feb 23 10:23:21 crc kubenswrapper[4904]: W0223 10:23:21.250879 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod717c8a73_d7f4_48d3_920d_f573f4f9dc9b.slice/crio-0ff67f1da60f502d931778b9bcdd889579468cd4118c551468b61b2ba8790afe WatchSource:0}: Error finding container 0ff67f1da60f502d931778b9bcdd889579468cd4118c551468b61b2ba8790afe: Status 404 returned error can't find the container with id 0ff67f1da60f502d931778b9bcdd889579468cd4118c551468b61b2ba8790afe
Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.392726 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.427199 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 10:23:21 crc kubenswrapper[4904]: W0223 10:23:21.431637 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda76369f2_3ab0_43c6_b601_9c2c0d5636c9.slice/crio-8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b WatchSource:0}: Error finding container 8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b: Status 404 returned error can't find the container with id 8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b
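The cadvisor warnings above (manager.go:1169, "Status 404") fire when a new crio-<id> cgroup appears before the runtime can answer for that container, which is typically a transient race during pod startup. The pod UID and container ID can be recovered from the cgroup path itself; a sketch, assuming journal text on stdin (the systemd slice name escapes the UID's dashes as underscores):

    # cgroup_ids.py - pull pod UID and container ID out of cgroup paths
    # like /kubepods.slice/.../kubepods-besteffort-pod<uid>.slice/crio-<id>
    import re
    import sys

    PATH = re.compile(r'kubepods-[a-z]+-pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)')

    for line in sys.stdin:
        m = PATH.search(line)
        if m:
            pod_uid = m.group(1).replace("_", "-")  # undo the systemd escaping
            print(f"pod UID {pod_uid}  container {m.group(2)}")

Cross-referencing those IDs against the PLEG "ContainerStarted" events below confirms the 404s here resolve themselves within a second of the watch error.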
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda76369f2_3ab0_43c6_b601_9c2c0d5636c9.slice/crio-8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b WatchSource:0}: Error finding container 8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b: Status 404 returned error can't find the container with id 8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.643582 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.657268 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.672680 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.738180 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerStarted","Data":"ab1381383d496ec9fd8c64802212f400388d5cd3a235fdd47cdceba228c52577"} Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.759069 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q7rxf" event={"ID":"717c8a73-d7f4-48d3-920d-f573f4f9dc9b","Type":"ContainerStarted","Data":"0ff67f1da60f502d931778b9bcdd889579468cd4118c551468b61b2ba8790afe"} Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.766895 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"03ef630b-9a38-4867-9a0a-d16b2c1804a8","Type":"ContainerStarted","Data":"9cea9840ce35faa326469870c5642a2479295ae3ba90d8d6064ecb885dcf00e0"} Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.771594 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a76369f2-3ab0-43c6-b601-9c2c0d5636c9","Type":"ContainerStarted","Data":"8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b"} Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.772950 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"404e5fa5-dbcb-4e7e-ad52-96f65cb16015","Type":"ContainerStarted","Data":"3b3ca4ffe72798bdf6d2cfa6d4e384c3c9ee1b09db7c2c160a92b36b1a204818"} Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.775264 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c16f60f3-f488-4d2c-858e-dee1662f8f4b","Type":"ContainerStarted","Data":"0f3ca3b2bccfd7d76d5729b10a25b440c7485c451dc516248c555a08608ac0db"} Feb 23 10:23:21 crc kubenswrapper[4904]: E0223 10:23:21.778920 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-r72t2" podUID="95e0989c-1669-4e5b-99ed-bab3cadf50f6" Feb 23 10:23:21 crc kubenswrapper[4904]: I0223 10:23:21.878277 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 10:23:21 crc kubenswrapper[4904]: W0223 10:23:21.897266 4904 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4601812_5c00_4a35_adc9_2003ca6001b2.slice/crio-7ad3b6fb211655a92b7d0f05d48a00ee874c24e3f5def053f43b060934133a4b WatchSource:0}: Error finding container 7ad3b6fb211655a92b7d0f05d48a00ee874c24e3f5def053f43b060934133a4b: Status 404 returned error can't find the container with id 7ad3b6fb211655a92b7d0f05d48a00ee874c24e3f5def053f43b060934133a4b Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.403921 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.417448 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.586999 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-dns-svc\") pod \"01ececcd-713f-4c35-bcdc-f186f8c1b081\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.587073 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379438a2-08e2-4bda-a112-372068f4c001-config\") pod \"379438a2-08e2-4bda-a112-372068f4c001\" (UID: \"379438a2-08e2-4bda-a112-372068f4c001\") " Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.587184 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-config\") pod \"01ececcd-713f-4c35-bcdc-f186f8c1b081\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.587385 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br9pq\" (UniqueName: \"kubernetes.io/projected/379438a2-08e2-4bda-a112-372068f4c001-kube-api-access-br9pq\") pod \"379438a2-08e2-4bda-a112-372068f4c001\" (UID: \"379438a2-08e2-4bda-a112-372068f4c001\") " Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.587445 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpsnh\" (UniqueName: \"kubernetes.io/projected/01ececcd-713f-4c35-bcdc-f186f8c1b081-kube-api-access-zpsnh\") pod \"01ececcd-713f-4c35-bcdc-f186f8c1b081\" (UID: \"01ececcd-713f-4c35-bcdc-f186f8c1b081\") " Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.587842 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01ececcd-713f-4c35-bcdc-f186f8c1b081" (UID: "01ececcd-713f-4c35-bcdc-f186f8c1b081"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.587857 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379438a2-08e2-4bda-a112-372068f4c001-config" (OuterVolumeSpecName: "config") pod "379438a2-08e2-4bda-a112-372068f4c001" (UID: "379438a2-08e2-4bda-a112-372068f4c001"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.588035 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-config" (OuterVolumeSpecName: "config") pod "01ececcd-713f-4c35-bcdc-f186f8c1b081" (UID: "01ececcd-713f-4c35-bcdc-f186f8c1b081"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.598255 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ececcd-713f-4c35-bcdc-f186f8c1b081-kube-api-access-zpsnh" (OuterVolumeSpecName: "kube-api-access-zpsnh") pod "01ececcd-713f-4c35-bcdc-f186f8c1b081" (UID: "01ececcd-713f-4c35-bcdc-f186f8c1b081"). InnerVolumeSpecName "kube-api-access-zpsnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.599026 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379438a2-08e2-4bda-a112-372068f4c001-kube-api-access-br9pq" (OuterVolumeSpecName: "kube-api-access-br9pq") pod "379438a2-08e2-4bda-a112-372068f4c001" (UID: "379438a2-08e2-4bda-a112-372068f4c001"). InnerVolumeSpecName "kube-api-access-br9pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.690185 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.690229 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br9pq\" (UniqueName: \"kubernetes.io/projected/379438a2-08e2-4bda-a112-372068f4c001-kube-api-access-br9pq\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.690248 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpsnh\" (UniqueName: \"kubernetes.io/projected/01ececcd-713f-4c35-bcdc-f186f8c1b081-kube-api-access-zpsnh\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.690260 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ececcd-713f-4c35-bcdc-f186f8c1b081-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.690272 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379438a2-08e2-4bda-a112-372068f4c001-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.790399 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7" event={"ID":"01ececcd-713f-4c35-bcdc-f186f8c1b081","Type":"ContainerDied","Data":"1dd275f2c6931185b715b7a193b55ee204267d950f7d6cf53acd040a6bac390b"} Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.790549 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4sjr7" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.810683 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl" event={"ID":"379438a2-08e2-4bda-a112-372068f4c001","Type":"ContainerDied","Data":"e5459c356688a0a2538ac49e7b7aa09a6a61f509f4564b5636b29601b20b9228"} Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.810872 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f8vcl" Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.814606 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e4601812-5c00-4a35-adc9-2003ca6001b2","Type":"ContainerStarted","Data":"7ad3b6fb211655a92b7d0f05d48a00ee874c24e3f5def053f43b060934133a4b"} Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.864945 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4sjr7"] Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.891879 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4sjr7"] Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.932672 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f8vcl"] Feb 23 10:23:22 crc kubenswrapper[4904]: I0223 10:23:22.943466 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f8vcl"] Feb 23 10:23:23 crc kubenswrapper[4904]: I0223 10:23:23.266078 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ececcd-713f-4c35-bcdc-f186f8c1b081" path="/var/lib/kubelet/pods/01ececcd-713f-4c35-bcdc-f186f8c1b081/volumes" Feb 23 10:23:23 crc kubenswrapper[4904]: I0223 10:23:23.266918 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379438a2-08e2-4bda-a112-372068f4c001" path="/var/lib/kubelet/pods/379438a2-08e2-4bda-a112-372068f4c001/volumes" Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.878229 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c16f60f3-f488-4d2c-858e-dee1662f8f4b","Type":"ContainerStarted","Data":"ddc1bb988fe150351a2519317077909e44b2020b0d6f699cae92828960ee9a43"} Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.880029 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q7rxf" event={"ID":"717c8a73-d7f4-48d3-920d-f573f4f9dc9b","Type":"ContainerStarted","Data":"c915386f929e501af14ec1e48cf7dcb48d8959870fe74d6806f3115be4ecf56c"} Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.880114 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-q7rxf" Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.881387 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e4601812-5c00-4a35-adc9-2003ca6001b2","Type":"ContainerStarted","Data":"8aadc4e8de880a28db23ecb0bf4853769370e3ad352882323be54d5de812ec8c"} Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.882636 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a76369f2-3ab0-43c6-b601-9c2c0d5636c9","Type":"ContainerStarted","Data":"704260c70f4076d1fec4f980f7f30afab803ee1bb524a3f9e9eac5ade1b470c1"} Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.882801 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/kube-state-metrics-0" Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.884147 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"03ef630b-9a38-4867-9a0a-d16b2c1804a8","Type":"ContainerStarted","Data":"e2ef1cab75a4d03793266e1c9b9be4e85822c0b7ae41cade597c151e81cff42d"} Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.884342 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.885313 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"404e5fa5-dbcb-4e7e-ad52-96f65cb16015","Type":"ContainerStarted","Data":"afafc19a277bf8c2ad4be43545bc5042a1b3fb97cfbd65bf24f27e9c6e018d8d"} Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.886682 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-82gvj" event={"ID":"7aa76c96-25cf-4196-a18f-9a33f9d9e195","Type":"ContainerStarted","Data":"d67a30aebf6bb72d7ad40e1b356028fd3b7bdfc78d76f147246dc2c8ca85e1c2"} Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.912358 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q7rxf" podStartSLOduration=23.016673173 podStartE2EDuration="28.912322184s" podCreationTimestamp="2026-02-23 10:23:00 +0000 UTC" firstStartedPulling="2026-02-23 10:23:21.257353959 +0000 UTC m=+1034.677727512" lastFinishedPulling="2026-02-23 10:23:27.15300301 +0000 UTC m=+1040.573376523" observedRunningTime="2026-02-23 10:23:28.903737791 +0000 UTC m=+1042.324111314" watchObservedRunningTime="2026-02-23 10:23:28.912322184 +0000 UTC m=+1042.332695697" Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.971116 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.079509736 podStartE2EDuration="33.971089782s" podCreationTimestamp="2026-02-23 10:22:55 +0000 UTC" firstStartedPulling="2026-02-23 10:23:21.660319764 +0000 UTC m=+1035.080693277" lastFinishedPulling="2026-02-23 10:23:27.55189981 +0000 UTC m=+1040.972273323" observedRunningTime="2026-02-23 10:23:28.938056935 +0000 UTC m=+1042.358430458" watchObservedRunningTime="2026-02-23 10:23:28.971089782 +0000 UTC m=+1042.391463295" Feb 23 10:23:28 crc kubenswrapper[4904]: I0223 10:23:28.998637 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.366165123 podStartE2EDuration="30.998603233s" podCreationTimestamp="2026-02-23 10:22:58 +0000 UTC" firstStartedPulling="2026-02-23 10:23:21.43464843 +0000 UTC m=+1034.855021943" lastFinishedPulling="2026-02-23 10:23:28.06708654 +0000 UTC m=+1041.487460053" observedRunningTime="2026-02-23 10:23:28.99039349 +0000 UTC m=+1042.410767003" watchObservedRunningTime="2026-02-23 10:23:28.998603233 +0000 UTC m=+1042.418976746" Feb 23 10:23:29 crc kubenswrapper[4904]: I0223 10:23:29.896219 4904 generic.go:334] "Generic (PLEG): container finished" podID="7aa76c96-25cf-4196-a18f-9a33f9d9e195" containerID="d67a30aebf6bb72d7ad40e1b356028fd3b7bdfc78d76f147246dc2c8ca85e1c2" exitCode=0 Feb 23 10:23:29 crc kubenswrapper[4904]: I0223 10:23:29.896343 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-82gvj" event={"ID":"7aa76c96-25cf-4196-a18f-9a33f9d9e195","Type":"ContainerDied","Data":"d67a30aebf6bb72d7ad40e1b356028fd3b7bdfc78d76f147246dc2c8ca85e1c2"} Feb 23 10:23:29 crc 
kubenswrapper[4904]: I0223 10:23:29.900490 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e626c7f2-db46-4757-bd05-eedfba7b5fc8","Type":"ContainerStarted","Data":"465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954"} Feb 23 10:23:30 crc kubenswrapper[4904]: I0223 10:23:30.920983 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"670153e4-0ac6-4ae8-ab14-08a3f2537c6c","Type":"ContainerStarted","Data":"157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652"} Feb 23 10:23:30 crc kubenswrapper[4904]: I0223 10:23:30.923259 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerStarted","Data":"61c1816af19d2a006e53c2585a487f27ce25eb18f6db49d8b009e2147acec83d"} Feb 23 10:23:31 crc kubenswrapper[4904]: I0223 10:23:31.936553 4904 generic.go:334] "Generic (PLEG): container finished" podID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" containerID="e8b5e09e7d237a7be6a8ceb39a427560b8336f1f42ee43c8575da764bfd692dc" exitCode=0 Feb 23 10:23:31 crc kubenswrapper[4904]: I0223 10:23:31.936620 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" event={"ID":"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58","Type":"ContainerDied","Data":"e8b5e09e7d237a7be6a8ceb39a427560b8336f1f42ee43c8575da764bfd692dc"} Feb 23 10:23:31 crc kubenswrapper[4904]: I0223 10:23:31.943908 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e4601812-5c00-4a35-adc9-2003ca6001b2","Type":"ContainerStarted","Data":"0ab601f76b0c81023bf2a5364697d8f636c826fd0acd8767bb6c045e4bc4f333"} Feb 23 10:23:31 crc kubenswrapper[4904]: I0223 10:23:31.948832 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-82gvj" event={"ID":"7aa76c96-25cf-4196-a18f-9a33f9d9e195","Type":"ContainerStarted","Data":"6bad5a09d26df69d1b649515287fcd7d996cca56ba04b29ac5338bba5ad06025"} Feb 23 10:23:31 crc kubenswrapper[4904]: I0223 10:23:31.949034 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-82gvj" event={"ID":"7aa76c96-25cf-4196-a18f-9a33f9d9e195","Type":"ContainerStarted","Data":"efa4de75bf1f5f5e6dbbd0c2f2de2dc548080b7ac1a82dab0500e363c861a623"} Feb 23 10:23:31 crc kubenswrapper[4904]: I0223 10:23:31.950042 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-82gvj" Feb 23 10:23:31 crc kubenswrapper[4904]: I0223 10:23:31.950571 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-82gvj" Feb 23 10:23:31 crc kubenswrapper[4904]: I0223 10:23:31.953837 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c16f60f3-f488-4d2c-858e-dee1662f8f4b","Type":"ContainerStarted","Data":"76919d8e169880fe34517cd43f6ea88f93766f7cde2a23d44f31c68b1d848278"} Feb 23 10:23:32 crc kubenswrapper[4904]: E0223 10:23:32.010944 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d97f71b_bfdd_4f9e_913d_8cbdb05c3e58.slice/crio-conmon-e8b5e09e7d237a7be6a8ceb39a427560b8336f1f42ee43c8575da764bfd692dc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d97f71b_bfdd_4f9e_913d_8cbdb05c3e58.slice/crio-e8b5e09e7d237a7be6a8ceb39a427560b8336f1f42ee43c8575da764bfd692dc.scope\": RecentStats: unable to find data in memory cache]" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.013366 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.597010084 podStartE2EDuration="32.013338602s" podCreationTimestamp="2026-02-23 10:23:00 +0000 UTC" firstStartedPulling="2026-02-23 10:23:21.41597616 +0000 UTC m=+1034.836349673" lastFinishedPulling="2026-02-23 10:23:30.832304638 +0000 UTC m=+1044.252678191" observedRunningTime="2026-02-23 10:23:31.999746936 +0000 UTC m=+1045.420120479" watchObservedRunningTime="2026-02-23 10:23:32.013338602 +0000 UTC m=+1045.433712125" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.047485 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.157733175 podStartE2EDuration="28.04746975s" podCreationTimestamp="2026-02-23 10:23:04 +0000 UTC" firstStartedPulling="2026-02-23 10:23:21.923210924 +0000 UTC m=+1035.343584437" lastFinishedPulling="2026-02-23 10:23:30.812947489 +0000 UTC m=+1044.233321012" observedRunningTime="2026-02-23 10:23:32.027308138 +0000 UTC m=+1045.447681641" watchObservedRunningTime="2026-02-23 10:23:32.04746975 +0000 UTC m=+1045.467843263" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.067924 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-82gvj" podStartSLOduration=22.891651895 podStartE2EDuration="32.06790238s" podCreationTimestamp="2026-02-23 10:23:00 +0000 UTC" firstStartedPulling="2026-02-23 10:23:17.617202192 +0000 UTC m=+1031.037575715" lastFinishedPulling="2026-02-23 10:23:26.793452687 +0000 UTC m=+1040.213826200" observedRunningTime="2026-02-23 10:23:32.057471074 +0000 UTC m=+1045.477844577" watchObservedRunningTime="2026-02-23 10:23:32.06790238 +0000 UTC m=+1045.488275893" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.217886 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.217950 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.300374 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.613342 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.678313 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.968066 4904 generic.go:334] "Generic (PLEG): container finished" podID="404e5fa5-dbcb-4e7e-ad52-96f65cb16015" containerID="afafc19a277bf8c2ad4be43545bc5042a1b3fb97cfbd65bf24f27e9c6e018d8d" exitCode=0 Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.968195 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"404e5fa5-dbcb-4e7e-ad52-96f65cb16015","Type":"ContainerDied","Data":"afafc19a277bf8c2ad4be43545bc5042a1b3fb97cfbd65bf24f27e9c6e018d8d"} Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 
10:23:32.973461 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" event={"ID":"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58","Type":"ContainerStarted","Data":"f3c36ef5ace311ca046ee9f0eb7a3e7d89e6aa28c92e948f3b6a6e44627d2d34"} Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.974041 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" Feb 23 10:23:32 crc kubenswrapper[4904]: I0223 10:23:32.975177 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 23 10:23:33 crc kubenswrapper[4904]: I0223 10:23:33.025138 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" podStartSLOduration=3.198111026 podStartE2EDuration="42.025112743s" podCreationTimestamp="2026-02-23 10:22:51 +0000 UTC" firstStartedPulling="2026-02-23 10:22:52.820089113 +0000 UTC m=+1006.240462626" lastFinishedPulling="2026-02-23 10:23:31.64709084 +0000 UTC m=+1045.067464343" observedRunningTime="2026-02-23 10:23:33.022856479 +0000 UTC m=+1046.443230012" watchObservedRunningTime="2026-02-23 10:23:33.025112743 +0000 UTC m=+1046.445486256" Feb 23 10:23:33 crc kubenswrapper[4904]: I0223 10:23:33.984763 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c2e03468-b21e-4a61-afd3-08f3c10c102d","Type":"ContainerStarted","Data":"2534f09eb18331e70600ae007092c7ba6754ee6b5e42daa81da8c54710ecbbff"} Feb 23 10:23:33 crc kubenswrapper[4904]: I0223 10:23:33.988016 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"404e5fa5-dbcb-4e7e-ad52-96f65cb16015","Type":"ContainerStarted","Data":"96ba38e118e5da7d0866b0d19083c13fe48280a5aee6fb6b2f489d2e220af634"} Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.046163 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.047275 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.048176 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=34.563902267 podStartE2EDuration="40.048160695s" podCreationTimestamp="2026-02-23 10:22:54 +0000 UTC" firstStartedPulling="2026-02-23 10:23:21.667490137 +0000 UTC m=+1035.087863650" lastFinishedPulling="2026-02-23 10:23:27.151748565 +0000 UTC m=+1040.572122078" observedRunningTime="2026-02-23 10:23:34.046405405 +0000 UTC m=+1047.466778918" watchObservedRunningTime="2026-02-23 10:23:34.048160695 +0000 UTC m=+1047.468534218" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.270476 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kj95b"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.369194 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2rpnk"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.370831 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.376365 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.421230 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2rpnk"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.519969 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.520075 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxll\" (UniqueName: \"kubernetes.io/projected/47150059-30d3-4904-88ca-861f02a664ba-kube-api-access-flxll\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.520631 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-config\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.520739 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.601536 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-zqsmt"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.625524 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.626667 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.626792 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.626874 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxll\" (UniqueName: \"kubernetes.io/projected/47150059-30d3-4904-88ca-861f02a664ba-kube-api-access-flxll\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.626909 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-config\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.627865 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-config\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.628608 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.629307 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.636775 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.699353 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zqsmt"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.733069 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trr2w\" (UniqueName: \"kubernetes.io/projected/28bca101-cf50-4eba-a2fe-e55dbc4fe121-kube-api-access-trr2w\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.733145 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bca101-cf50-4eba-a2fe-e55dbc4fe121-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.733191 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/28bca101-cf50-4eba-a2fe-e55dbc4fe121-ovn-rundir\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.733223 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/28bca101-cf50-4eba-a2fe-e55dbc4fe121-ovs-rundir\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.736866 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bca101-cf50-4eba-a2fe-e55dbc4fe121-combined-ca-bundle\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.737044 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bca101-cf50-4eba-a2fe-e55dbc4fe121-config\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.781315 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxll\" (UniqueName: \"kubernetes.io/projected/47150059-30d3-4904-88ca-861f02a664ba-kube-api-access-flxll\") pod \"dnsmasq-dns-7fd796d7df-2rpnk\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.797113 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-r72t2"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.847204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trr2w\" (UniqueName: \"kubernetes.io/projected/28bca101-cf50-4eba-a2fe-e55dbc4fe121-kube-api-access-trr2w\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.847286 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bca101-cf50-4eba-a2fe-e55dbc4fe121-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.847341 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/28bca101-cf50-4eba-a2fe-e55dbc4fe121-ovn-rundir\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.847367 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/28bca101-cf50-4eba-a2fe-e55dbc4fe121-ovs-rundir\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.847389 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bca101-cf50-4eba-a2fe-e55dbc4fe121-combined-ca-bundle\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.847442 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bca101-cf50-4eba-a2fe-e55dbc4fe121-config\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.848486 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bca101-cf50-4eba-a2fe-e55dbc4fe121-config\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.851243 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/28bca101-cf50-4eba-a2fe-e55dbc4fe121-ovn-rundir\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.851353 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/28bca101-cf50-4eba-a2fe-e55dbc4fe121-ovs-rundir\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.875711 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bca101-cf50-4eba-a2fe-e55dbc4fe121-combined-ca-bundle\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.876628 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bca101-cf50-4eba-a2fe-e55dbc4fe121-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zqsmt\" (UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.894377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trr2w\" (UniqueName: \"kubernetes.io/projected/28bca101-cf50-4eba-a2fe-e55dbc4fe121-kube-api-access-trr2w\") pod \"ovn-controller-metrics-zqsmt\" 
(UID: \"28bca101-cf50-4eba-a2fe-e55dbc4fe121\") " pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.899577 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.901469 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.915160 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-z9ncr"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.917075 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.935781 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.935951 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.936065 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lv9lw" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.936182 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.936300 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.950111 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-z9ncr"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.960524 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zqsmt" Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.966008 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 10:23:34 crc kubenswrapper[4904]: I0223 10:23:34.995110 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.006464 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" containerName="dnsmasq-dns" containerID="cri-o://f3c36ef5ace311ca046ee9f0eb7a3e7d89e6aa28c92e948f3b6a6e44627d2d34" gracePeriod=10 Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.051190 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.051574 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.051665 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.051790 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/73361e7e-1ced-438c-9d84-f425467c6717-kube-api-access-2djqq\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.051901 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.052045 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.052145 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.052225 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-config\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: 
\"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.052303 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crl2r\" (UniqueName: \"kubernetes.io/projected/22141935-93c0-47b1-aa17-ca81106c5f5c-kube-api-access-crl2r\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.052524 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22141935-93c0-47b1-aa17-ca81106c5f5c-scripts\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.052639 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22141935-93c0-47b1-aa17-ca81106c5f5c-config\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.052771 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22141935-93c0-47b1-aa17-ca81106c5f5c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.156478 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.156552 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.156578 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-config\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.156599 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crl2r\" (UniqueName: \"kubernetes.io/projected/22141935-93c0-47b1-aa17-ca81106c5f5c-kube-api-access-crl2r\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.156646 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22141935-93c0-47b1-aa17-ca81106c5f5c-scripts\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.156675 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/22141935-93c0-47b1-aa17-ca81106c5f5c-config\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.156749 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22141935-93c0-47b1-aa17-ca81106c5f5c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.156818 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.157128 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.157174 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.157201 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/73361e7e-1ced-438c-9d84-f425467c6717-kube-api-access-2djqq\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.157226 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.159606 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.162543 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.163511 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.163882 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-config\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.165538 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22141935-93c0-47b1-aa17-ca81106c5f5c-scripts\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.167650 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22141935-93c0-47b1-aa17-ca81106c5f5c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.168283 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22141935-93c0-47b1-aa17-ca81106c5f5c-config\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.170325 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.171663 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.172594 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22141935-93c0-47b1-aa17-ca81106c5f5c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.198833 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crl2r\" (UniqueName: \"kubernetes.io/projected/22141935-93c0-47b1-aa17-ca81106c5f5c-kube-api-access-crl2r\") pod \"ovn-northd-0\" (UID: \"22141935-93c0-47b1-aa17-ca81106c5f5c\") " pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.239290 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/73361e7e-1ced-438c-9d84-f425467c6717-kube-api-access-2djqq\") pod \"dnsmasq-dns-86db49b7ff-z9ncr\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.275893 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.451217 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.495695 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2rpnk"] Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.608644 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zqsmt"] Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.779145 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.779208 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 23 10:23:35 crc kubenswrapper[4904]: I0223 10:23:35.873977 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.016131 4904 generic.go:334] "Generic (PLEG): container finished" podID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" containerID="f3c36ef5ace311ca046ee9f0eb7a3e7d89e6aa28c92e948f3b6a6e44627d2d34" exitCode=0 Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.016266 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" event={"ID":"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58","Type":"ContainerDied","Data":"f3c36ef5ace311ca046ee9f0eb7a3e7d89e6aa28c92e948f3b6a6e44627d2d34"} Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.670047 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-r72t2" Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.689460 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-config\") pod \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.689702 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-dns-svc\") pod \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.689874 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc94b\" (UniqueName: \"kubernetes.io/projected/95e0989c-1669-4e5b-99ed-bab3cadf50f6-kube-api-access-nc94b\") pod \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\" (UID: \"95e0989c-1669-4e5b-99ed-bab3cadf50f6\") " Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.694853 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-config" (OuterVolumeSpecName: "config") pod "95e0989c-1669-4e5b-99ed-bab3cadf50f6" (UID: "95e0989c-1669-4e5b-99ed-bab3cadf50f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.695323 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95e0989c-1669-4e5b-99ed-bab3cadf50f6" (UID: "95e0989c-1669-4e5b-99ed-bab3cadf50f6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.701336 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e0989c-1669-4e5b-99ed-bab3cadf50f6-kube-api-access-nc94b" (OuterVolumeSpecName: "kube-api-access-nc94b") pod "95e0989c-1669-4e5b-99ed-bab3cadf50f6" (UID: "95e0989c-1669-4e5b-99ed-bab3cadf50f6"). InnerVolumeSpecName "kube-api-access-nc94b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.792384 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.792412 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e0989c-1669-4e5b-99ed-bab3cadf50f6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:36 crc kubenswrapper[4904]: I0223 10:23:36.792422 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc94b\" (UniqueName: \"kubernetes.io/projected/95e0989c-1669-4e5b-99ed-bab3cadf50f6-kube-api-access-nc94b\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.059318 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-r72t2" event={"ID":"95e0989c-1669-4e5b-99ed-bab3cadf50f6","Type":"ContainerDied","Data":"c997ea828668f42433a52ee66666ed901cf9b89ae7a38b0fd66596165f1707cd"} Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.059422 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-r72t2" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.066477 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.073104 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" event={"ID":"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58","Type":"ContainerDied","Data":"a50f155337283ab21a1d005002e84c4d682745b2837c6a6568b209c9d5e76c6d"} Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.073174 4904 scope.go:117] "RemoveContainer" containerID="f3c36ef5ace311ca046ee9f0eb7a3e7d89e6aa28c92e948f3b6a6e44627d2d34" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.106858 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xvzd\" (UniqueName: \"kubernetes.io/projected/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-kube-api-access-2xvzd\") pod \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.107018 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-config\") pod \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.107155 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-dns-svc\") pod \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\" (UID: \"0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58\") " Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.109985 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zqsmt" event={"ID":"28bca101-cf50-4eba-a2fe-e55dbc4fe121","Type":"ContainerStarted","Data":"a5d45ed85686fd4a193b0183058cb5bde5d4e16f10e580006a4ab4d08363ea04"} Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.131584 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" event={"ID":"47150059-30d3-4904-88ca-861f02a664ba","Type":"ContainerStarted","Data":"3278a819ba173e1e0812242dcd47b6a1deddf7ea05d81392af60640c46c481e0"} Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.140053 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-kube-api-access-2xvzd" (OuterVolumeSpecName: "kube-api-access-2xvzd") pod "0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" (UID: "0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58"). InnerVolumeSpecName "kube-api-access-2xvzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.212785 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xvzd\" (UniqueName: \"kubernetes.io/projected/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-kube-api-access-2xvzd\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.337221 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" (UID: "0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.338317 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-config" (OuterVolumeSpecName: "config") pod "0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" (UID: "0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:37 crc kubenswrapper[4904]: W0223 10:23:37.400116 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22141935_93c0_47b1_aa17_ca81106c5f5c.slice/crio-3a7ed043a4f7871550babdae9a9e7d327ed5188bf8e2c02c085470aaaeaa6dcf WatchSource:0}: Error finding container 3a7ed043a4f7871550babdae9a9e7d327ed5188bf8e2c02c085470aaaeaa6dcf: Status 404 returned error can't find the container with id 3a7ed043a4f7871550babdae9a9e7d327ed5188bf8e2c02c085470aaaeaa6dcf Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.408056 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.408101 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-z9ncr"] Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.427520 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.427954 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.532821 4904 scope.go:117] "RemoveContainer" containerID="e8b5e09e7d237a7be6a8ceb39a427560b8336f1f42ee43c8575da764bfd692dc" Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.583226 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-r72t2"] Feb 23 10:23:37 crc kubenswrapper[4904]: I0223 10:23:37.588677 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-r72t2"] Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.148412 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22141935-93c0-47b1-aa17-ca81106c5f5c","Type":"ContainerStarted","Data":"3a7ed043a4f7871550babdae9a9e7d327ed5188bf8e2c02c085470aaaeaa6dcf"} Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.151233 4904 generic.go:334] "Generic (PLEG): container finished" podID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerID="61c1816af19d2a006e53c2585a487f27ce25eb18f6db49d8b009e2147acec83d" exitCode=0 Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.151300 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerDied","Data":"61c1816af19d2a006e53c2585a487f27ce25eb18f6db49d8b009e2147acec83d"} Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.157070 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kj95b" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.166192 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zqsmt" event={"ID":"28bca101-cf50-4eba-a2fe-e55dbc4fe121","Type":"ContainerStarted","Data":"80cdc7f43ff15fbe58964212eb539aba37aaa6d5ce4be99c83fe1c27af4fb727"} Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.169297 4904 generic.go:334] "Generic (PLEG): container finished" podID="73361e7e-1ced-438c-9d84-f425467c6717" containerID="32063db43b3ecae51a75431366d7d062145777b354dd4f1ab80bf6d87b92e7be" exitCode=0 Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.169373 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" event={"ID":"73361e7e-1ced-438c-9d84-f425467c6717","Type":"ContainerDied","Data":"32063db43b3ecae51a75431366d7d062145777b354dd4f1ab80bf6d87b92e7be"} Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.169456 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" event={"ID":"73361e7e-1ced-438c-9d84-f425467c6717","Type":"ContainerStarted","Data":"dc28a555c934c48797ecc04c5fd7c971cf02435b5de0dc78db9dba74d9b69343"} Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.171929 4904 generic.go:334] "Generic (PLEG): container finished" podID="47150059-30d3-4904-88ca-861f02a664ba" containerID="33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145" exitCode=0 Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.171993 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" event={"ID":"47150059-30d3-4904-88ca-861f02a664ba","Type":"ContainerDied","Data":"33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145"} Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.222297 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kj95b"] Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.231264 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kj95b"] Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.269817 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-zqsmt" podStartSLOduration=4.269785913 podStartE2EDuration="4.269785913s" podCreationTimestamp="2026-02-23 10:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:23:38.267088816 +0000 UTC m=+1051.687462329" watchObservedRunningTime="2026-02-23 10:23:38.269785913 +0000 UTC m=+1051.690159416" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.553361 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.591945 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2rpnk"] Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.617543 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-89b5g"] Feb 23 10:23:38 crc kubenswrapper[4904]: E0223 10:23:38.618349 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" containerName="init" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.618370 4904 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" containerName="init" Feb 23 10:23:38 crc kubenswrapper[4904]: E0223 10:23:38.618402 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" containerName="dnsmasq-dns" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.618410 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" containerName="dnsmasq-dns" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.618571 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" containerName="dnsmasq-dns" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.621682 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.651428 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.651512 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.651577 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-config\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.651602 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fg6x\" (UniqueName: \"kubernetes.io/projected/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-kube-api-access-8fg6x\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.651627 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-dns-svc\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.665814 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-89b5g"] Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.753493 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.753587 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-config\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.753630 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fg6x\" (UniqueName: \"kubernetes.io/projected/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-kube-api-access-8fg6x\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.753666 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-dns-svc\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.753776 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.754674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-config\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.755326 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.755496 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.756138 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-dns-svc\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.787626 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fg6x\" (UniqueName: \"kubernetes.io/projected/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-kube-api-access-8fg6x\") pod \"dnsmasq-dns-698758b865-89b5g\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:38 crc kubenswrapper[4904]: I0223 10:23:38.955419 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.185963 4904 generic.go:334] "Generic (PLEG): container finished" podID="c2e03468-b21e-4a61-afd3-08f3c10c102d" containerID="2534f09eb18331e70600ae007092c7ba6754ee6b5e42daa81da8c54710ecbbff" exitCode=0 Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.187136 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c2e03468-b21e-4a61-afd3-08f3c10c102d","Type":"ContainerDied","Data":"2534f09eb18331e70600ae007092c7ba6754ee6b5e42daa81da8c54710ecbbff"} Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.288942 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58" path="/var/lib/kubelet/pods/0d97f71b-bfdd-4f9e-913d-8cbdb05c3e58/volumes" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.306055 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e0989c-1669-4e5b-99ed-bab3cadf50f6" path="/var/lib/kubelet/pods/95e0989c-1669-4e5b-99ed-bab3cadf50f6/volumes" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.645644 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-89b5g"] Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.758005 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.773866 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.778310 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.781147 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7kz85" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.781248 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.781353 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.781513 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.818130 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/71f24a32-6e0a-4a39-9570-92c373672a9b-lock\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.818195 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2ttc\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-kube-api-access-x2ttc\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.818226 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/71f24a32-6e0a-4a39-9570-92c373672a9b-cache\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 
10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.818283 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.818319 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.818406 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f24a32-6e0a-4a39-9570-92c373672a9b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.919953 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f24a32-6e0a-4a39-9570-92c373672a9b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.920041 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/71f24a32-6e0a-4a39-9570-92c373672a9b-lock\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.920068 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2ttc\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-kube-api-access-x2ttc\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.920090 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/71f24a32-6e0a-4a39-9570-92c373672a9b-cache\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.920116 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.920147 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.920706 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") device mount 
path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: E0223 10:23:39.922182 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 10:23:39 crc kubenswrapper[4904]: E0223 10:23:39.922219 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 10:23:39 crc kubenswrapper[4904]: E0223 10:23:39.922309 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift podName:71f24a32-6e0a-4a39-9570-92c373672a9b nodeName:}" failed. No retries permitted until 2026-02-23 10:23:40.422277255 +0000 UTC m=+1053.842650768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift") pod "swift-storage-0" (UID: "71f24a32-6e0a-4a39-9570-92c373672a9b") : configmap "swift-ring-files" not found Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.923254 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/71f24a32-6e0a-4a39-9570-92c373672a9b-cache\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.923328 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/71f24a32-6e0a-4a39-9570-92c373672a9b-lock\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.927680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f24a32-6e0a-4a39-9570-92c373672a9b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.943233 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2ttc\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-kube-api-access-x2ttc\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:39 crc kubenswrapper[4904]: I0223 10:23:39.954731 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.202579 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22141935-93c0-47b1-aa17-ca81106c5f5c","Type":"ContainerStarted","Data":"9f3161fd98c61dc5a77ced0a459488c9d07a8a30bfd0290306c4859fd406027e"} Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.202669 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22141935-93c0-47b1-aa17-ca81106c5f5c","Type":"ContainerStarted","Data":"4d6e7b970ccdcd6e964326d79a7bb85c065f50d8eebf5c170eccf12d0a22bef5"} Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.204035 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-northd-0" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.208303 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c2e03468-b21e-4a61-afd3-08f3c10c102d","Type":"ContainerStarted","Data":"024d3ae2e7f1857f409ffe9c4b9f84570b218294c87653cd5d9a86f7817edbb2"} Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.214361 4904 generic.go:334] "Generic (PLEG): container finished" podID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" containerID="601e87cb8e0a01d692d92986aeafa11c631326d911a94e200e580536d62df253" exitCode=0 Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.214471 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-89b5g" event={"ID":"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd","Type":"ContainerDied","Data":"601e87cb8e0a01d692d92986aeafa11c631326d911a94e200e580536d62df253"} Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.214515 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-89b5g" event={"ID":"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd","Type":"ContainerStarted","Data":"2aad8aec7e2d52177876a89035e795115a15bb7eb92cd933f85f628b58b976ef"} Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.221969 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" event={"ID":"73361e7e-1ced-438c-9d84-f425467c6717","Type":"ContainerStarted","Data":"ae097e71fa68fe2f15968aed3c4334d9794e62e543c046f80e04710fb2d49bb2"} Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.222153 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.244229 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" event={"ID":"47150059-30d3-4904-88ca-861f02a664ba","Type":"ContainerStarted","Data":"8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2"} Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.244486 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" podUID="47150059-30d3-4904-88ca-861f02a664ba" containerName="dnsmasq-dns" containerID="cri-o://8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2" gracePeriod=10 Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.244598 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.248669 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.557635751 podStartE2EDuration="6.248646126s" podCreationTimestamp="2026-02-23 10:23:34 +0000 UTC" firstStartedPulling="2026-02-23 10:23:37.414641146 +0000 UTC m=+1050.835014659" lastFinishedPulling="2026-02-23 10:23:39.105651511 +0000 UTC m=+1052.526025034" observedRunningTime="2026-02-23 10:23:40.241312838 +0000 UTC m=+1053.661686351" watchObservedRunningTime="2026-02-23 10:23:40.248646126 +0000 UTC m=+1053.669019639" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.331694 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" podStartSLOduration=6.331676693 podStartE2EDuration="6.331676693s" podCreationTimestamp="2026-02-23 10:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:23:40.328017389 +0000 UTC m=+1053.748390912" watchObservedRunningTime="2026-02-23 10:23:40.331676693 +0000 UTC m=+1053.752050206" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.359987 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371988.494831 podStartE2EDuration="48.359944635s" podCreationTimestamp="2026-02-23 10:22:52 +0000 UTC" firstStartedPulling="2026-02-23 10:23:03.217651926 +0000 UTC m=+1016.638025449" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:23:40.354249733 +0000 UTC m=+1053.774623246" watchObservedRunningTime="2026-02-23 10:23:40.359944635 +0000 UTC m=+1053.780318148" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.383281 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" podStartSLOduration=6.383243246 podStartE2EDuration="6.383243246s" podCreationTimestamp="2026-02-23 10:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:23:40.37773552 +0000 UTC m=+1053.798109033" watchObservedRunningTime="2026-02-23 10:23:40.383243246 +0000 UTC m=+1053.803616759" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.411197 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 23 10:23:40 crc kubenswrapper[4904]: E0223 10:23:40.442479 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 10:23:40 crc kubenswrapper[4904]: E0223 10:23:40.442524 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 10:23:40 crc kubenswrapper[4904]: E0223 10:23:40.442594 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift podName:71f24a32-6e0a-4a39-9570-92c373672a9b nodeName:}" failed. No retries permitted until 2026-02-23 10:23:41.442562609 +0000 UTC m=+1054.862936322 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift") pod "swift-storage-0" (UID: "71f24a32-6e0a-4a39-9570-92c373672a9b") : configmap "swift-ring-files" not found Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.443303 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.534898 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.737149 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.857492 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-dns-svc\") pod \"47150059-30d3-4904-88ca-861f02a664ba\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.857612 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-config\") pod \"47150059-30d3-4904-88ca-861f02a664ba\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.857719 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxll\" (UniqueName: \"kubernetes.io/projected/47150059-30d3-4904-88ca-861f02a664ba-kube-api-access-flxll\") pod \"47150059-30d3-4904-88ca-861f02a664ba\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.857840 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-ovsdbserver-nb\") pod \"47150059-30d3-4904-88ca-861f02a664ba\" (UID: \"47150059-30d3-4904-88ca-861f02a664ba\") " Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.868112 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47150059-30d3-4904-88ca-861f02a664ba-kube-api-access-flxll" (OuterVolumeSpecName: "kube-api-access-flxll") pod "47150059-30d3-4904-88ca-861f02a664ba" (UID: "47150059-30d3-4904-88ca-861f02a664ba"). InnerVolumeSpecName "kube-api-access-flxll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.909770 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-config" (OuterVolumeSpecName: "config") pod "47150059-30d3-4904-88ca-861f02a664ba" (UID: "47150059-30d3-4904-88ca-861f02a664ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.909791 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47150059-30d3-4904-88ca-861f02a664ba" (UID: "47150059-30d3-4904-88ca-861f02a664ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.931095 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47150059-30d3-4904-88ca-861f02a664ba" (UID: "47150059-30d3-4904-88ca-861f02a664ba"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.959996 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.960291 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.960359 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flxll\" (UniqueName: \"kubernetes.io/projected/47150059-30d3-4904-88ca-861f02a664ba-kube-api-access-flxll\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:40 crc kubenswrapper[4904]: I0223 10:23:40.960527 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47150059-30d3-4904-88ca-861f02a664ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.288325 4904 generic.go:334] "Generic (PLEG): container finished" podID="47150059-30d3-4904-88ca-861f02a664ba" containerID="8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2" exitCode=0 Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.289577 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.299941 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.300081 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-89b5g" event={"ID":"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd","Type":"ContainerStarted","Data":"8e0c0c2ee0fed55ae446a6a3f38e25cf313f5f0659b65ccd8a05cb1c83891807"} Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.300297 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" event={"ID":"47150059-30d3-4904-88ca-861f02a664ba","Type":"ContainerDied","Data":"8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2"} Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.300335 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-2rpnk" event={"ID":"47150059-30d3-4904-88ca-861f02a664ba","Type":"ContainerDied","Data":"3278a819ba173e1e0812242dcd47b6a1deddf7ea05d81392af60640c46c481e0"} Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.300378 4904 scope.go:117] "RemoveContainer" containerID="8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.307206 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-89b5g" podStartSLOduration=3.307179625 podStartE2EDuration="3.307179625s" podCreationTimestamp="2026-02-23 10:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:23:41.300960138 +0000 UTC m=+1054.721333651" watchObservedRunningTime="2026-02-23 10:23:41.307179625 +0000 UTC m=+1054.727553138" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.340685 4904 scope.go:117] "RemoveContainer" 
containerID="33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.341521 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2rpnk"] Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.349520 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-2rpnk"] Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.361299 4904 scope.go:117] "RemoveContainer" containerID="8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2" Feb 23 10:23:41 crc kubenswrapper[4904]: E0223 10:23:41.362053 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2\": container with ID starting with 8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2 not found: ID does not exist" containerID="8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.362124 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2"} err="failed to get container status \"8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2\": rpc error: code = NotFound desc = could not find container \"8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2\": container with ID starting with 8a2cac87ebd1244f9bf4d059dacac70354a4568ca112fdaf8d9cc4ebd079bab2 not found: ID does not exist" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.362188 4904 scope.go:117] "RemoveContainer" containerID="33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145" Feb 23 10:23:41 crc kubenswrapper[4904]: E0223 10:23:41.362516 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145\": container with ID starting with 33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145 not found: ID does not exist" containerID="33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.362558 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145"} err="failed to get container status \"33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145\": rpc error: code = NotFound desc = could not find container \"33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145\": container with ID starting with 33530eb86b1fc17eb24ed65fe094884a48d8f51db25868dca816bbb069cbb145 not found: ID does not exist" Feb 23 10:23:41 crc kubenswrapper[4904]: I0223 10:23:41.473067 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:41 crc kubenswrapper[4904]: E0223 10:23:41.473602 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 10:23:41 crc kubenswrapper[4904]: E0223 10:23:41.473648 4904 projected.go:194] Error preparing data for projected volume etc-swift for 
pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 10:23:41 crc kubenswrapper[4904]: E0223 10:23:41.473799 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift podName:71f24a32-6e0a-4a39-9570-92c373672a9b nodeName:}" failed. No retries permitted until 2026-02-23 10:23:43.47370076 +0000 UTC m=+1056.894074273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift") pod "swift-storage-0" (UID: "71f24a32-6e0a-4a39-9570-92c373672a9b") : configmap "swift-ring-files" not found Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.268692 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47150059-30d3-4904-88ca-861f02a664ba" path="/var/lib/kubelet/pods/47150059-30d3-4904-88ca-861f02a664ba/volumes" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.519478 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:43 crc kubenswrapper[4904]: E0223 10:23:43.519748 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 10:23:43 crc kubenswrapper[4904]: E0223 10:23:43.519769 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 10:23:43 crc kubenswrapper[4904]: E0223 10:23:43.519828 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift podName:71f24a32-6e0a-4a39-9570-92c373672a9b nodeName:}" failed. No retries permitted until 2026-02-23 10:23:47.519809552 +0000 UTC m=+1060.940183065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift") pod "swift-storage-0" (UID: "71f24a32-6e0a-4a39-9570-92c373672a9b") : configmap "swift-ring-files" not found Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.593228 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hddnt"] Feb 23 10:23:43 crc kubenswrapper[4904]: E0223 10:23:43.594274 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47150059-30d3-4904-88ca-861f02a664ba" containerName="dnsmasq-dns" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.594310 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="47150059-30d3-4904-88ca-861f02a664ba" containerName="dnsmasq-dns" Feb 23 10:23:43 crc kubenswrapper[4904]: E0223 10:23:43.594348 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47150059-30d3-4904-88ca-861f02a664ba" containerName="init" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.594358 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="47150059-30d3-4904-88ca-861f02a664ba" containerName="init" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.594577 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="47150059-30d3-4904-88ca-861f02a664ba" containerName="dnsmasq-dns" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.595489 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.597431 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.601368 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.601372 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.618586 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hddnt"] Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.727111 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnq2k\" (UniqueName: \"kubernetes.io/projected/8b69c9fa-305e-484f-98e7-c8928bec7a13-kube-api-access-nnq2k\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.727164 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-scripts\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.727209 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-swiftconf\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.727248 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b69c9fa-305e-484f-98e7-c8928bec7a13-etc-swift\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.727278 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-ring-data-devices\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.727315 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-combined-ca-bundle\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.727398 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-dispersionconf\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 
10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.830859 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-combined-ca-bundle\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.831123 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-dispersionconf\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.831171 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnq2k\" (UniqueName: \"kubernetes.io/projected/8b69c9fa-305e-484f-98e7-c8928bec7a13-kube-api-access-nnq2k\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.831205 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-scripts\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.831303 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-swiftconf\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.831391 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b69c9fa-305e-484f-98e7-c8928bec7a13-etc-swift\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.831449 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-ring-data-devices\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.832605 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-ring-data-devices\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.832928 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b69c9fa-305e-484f-98e7-c8928bec7a13-etc-swift\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.833076 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-scripts\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.840680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-dispersionconf\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.842409 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-combined-ca-bundle\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.854674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-swiftconf\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.861024 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnq2k\" (UniqueName: \"kubernetes.io/projected/8b69c9fa-305e-484f-98e7-c8928bec7a13-kube-api-access-nnq2k\") pod \"swift-ring-rebalance-hddnt\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:43 crc kubenswrapper[4904]: I0223 10:23:43.919297 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.348997 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.349362 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.584698 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tpl7h"] Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.586072 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.588587 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.600348 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tpl7h"] Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.652105 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-operator-scripts\") pod \"root-account-create-update-tpl7h\" (UID: \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\") " pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.652287 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvnzd\" (UniqueName: \"kubernetes.io/projected/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-kube-api-access-rvnzd\") pod \"root-account-create-update-tpl7h\" (UID: \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\") " pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.754684 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-operator-scripts\") pod \"root-account-create-update-tpl7h\" (UID: \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\") " pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.754957 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvnzd\" (UniqueName: \"kubernetes.io/projected/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-kube-api-access-rvnzd\") pod \"root-account-create-update-tpl7h\" (UID: \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\") " pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.755612 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-operator-scripts\") pod \"root-account-create-update-tpl7h\" (UID: \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\") " pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.773707 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvnzd\" (UniqueName: \"kubernetes.io/projected/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-kube-api-access-rvnzd\") pod \"root-account-create-update-tpl7h\" (UID: \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\") " pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:44 crc kubenswrapper[4904]: I0223 10:23:44.905399 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:45 crc kubenswrapper[4904]: I0223 10:23:45.453953 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:46 crc kubenswrapper[4904]: I0223 10:23:46.973398 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.054270 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.241144 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9169-account-create-update-j44z4"] Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.242976 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.245571 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.307321 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-operator-scripts\") pod \"placement-9169-account-create-update-j44z4\" (UID: \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\") " pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.307528 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmfhx\" (UniqueName: \"kubernetes.io/projected/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-kube-api-access-lmfhx\") pod \"placement-9169-account-create-update-j44z4\" (UID: \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\") " pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.315297 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9169-account-create-update-j44z4"] Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.409585 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmfhx\" (UniqueName: \"kubernetes.io/projected/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-kube-api-access-lmfhx\") pod \"placement-9169-account-create-update-j44z4\" (UID: \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\") " pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.409706 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-operator-scripts\") pod \"placement-9169-account-create-update-j44z4\" (UID: \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\") " pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.411059 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-operator-scripts\") pod \"placement-9169-account-create-update-j44z4\" (UID: \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\") " pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.436153 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmfhx\" (UniqueName: \"kubernetes.io/projected/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-kube-api-access-lmfhx\") pod \"placement-9169-account-create-update-j44z4\" (UID: \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\") " pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.585828 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:47 crc kubenswrapper[4904]: I0223 10:23:47.613521 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:47 crc kubenswrapper[4904]: E0223 10:23:47.613806 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 10:23:47 crc kubenswrapper[4904]: E0223 10:23:47.613829 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 10:23:47 crc kubenswrapper[4904]: E0223 10:23:47.613904 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift podName:71f24a32-6e0a-4a39-9570-92c373672a9b nodeName:}" failed. No retries permitted until 2026-02-23 10:23:55.61387173 +0000 UTC m=+1069.034245283 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift") pod "swift-storage-0" (UID: "71f24a32-6e0a-4a39-9570-92c373672a9b") : configmap "swift-ring-files" not found Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.407075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerStarted","Data":"204774cf3c338a14fa8418d4b7dabbbc8f8086735d48069849b565df6f421a3b"} Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.480272 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9169-account-create-update-j44z4"] Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.492813 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-c9pnc"] Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.493965 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.501996 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-c9pnc"] Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.555471 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hzmq\" (UniqueName: \"kubernetes.io/projected/83e2ded5-4041-484c-b117-6df53876c328-kube-api-access-5hzmq\") pod \"watcher-db-create-c9pnc\" (UID: \"83e2ded5-4041-484c-b117-6df53876c328\") " pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.555766 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e2ded5-4041-484c-b117-6df53876c328-operator-scripts\") pod \"watcher-db-create-c9pnc\" (UID: \"83e2ded5-4041-484c-b117-6df53876c328\") " pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.568990 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tpl7h"] Feb 23 10:23:48 crc kubenswrapper[4904]: W0223 10:23:48.582861 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbd3ac15_7ce7_4833_a9a4_df5aefacfc9d.slice/crio-47f5bb2a75d3970c654c4b2fc530c1f00691f248691c75875209d1797deba61d WatchSource:0}: Error finding container 47f5bb2a75d3970c654c4b2fc530c1f00691f248691c75875209d1797deba61d: Status 404 returned error can't find the container with id 47f5bb2a75d3970c654c4b2fc530c1f00691f248691c75875209d1797deba61d Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.584154 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hddnt"] Feb 23 10:23:48 crc kubenswrapper[4904]: W0223 10:23:48.595028 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b69c9fa_305e_484f_98e7_c8928bec7a13.slice/crio-114ddb7efa65ac480244b7d1f1a2ccf483d592cecb8f1b88414bcdec1085bb6c WatchSource:0}: Error finding container 114ddb7efa65ac480244b7d1f1a2ccf483d592cecb8f1b88414bcdec1085bb6c: Status 404 returned error can't find the container with id 114ddb7efa65ac480244b7d1f1a2ccf483d592cecb8f1b88414bcdec1085bb6c Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.598219 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-a4fe-account-create-update-xbczb"] Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.599936 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.603442 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.608615 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-a4fe-account-create-update-xbczb"] Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.657821 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v97m\" (UniqueName: \"kubernetes.io/projected/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-kube-api-access-8v97m\") pod \"watcher-a4fe-account-create-update-xbczb\" (UID: \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\") " pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.657901 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hzmq\" (UniqueName: \"kubernetes.io/projected/83e2ded5-4041-484c-b117-6df53876c328-kube-api-access-5hzmq\") pod \"watcher-db-create-c9pnc\" (UID: \"83e2ded5-4041-484c-b117-6df53876c328\") " pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.657924 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e2ded5-4041-484c-b117-6df53876c328-operator-scripts\") pod \"watcher-db-create-c9pnc\" (UID: \"83e2ded5-4041-484c-b117-6df53876c328\") " pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.658370 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-operator-scripts\") pod \"watcher-a4fe-account-create-update-xbczb\" (UID: \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\") " pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.658866 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e2ded5-4041-484c-b117-6df53876c328-operator-scripts\") pod \"watcher-db-create-c9pnc\" (UID: \"83e2ded5-4041-484c-b117-6df53876c328\") " pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.676194 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hzmq\" (UniqueName: \"kubernetes.io/projected/83e2ded5-4041-484c-b117-6df53876c328-kube-api-access-5hzmq\") pod \"watcher-db-create-c9pnc\" (UID: \"83e2ded5-4041-484c-b117-6df53876c328\") " pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.762096 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-operator-scripts\") pod \"watcher-a4fe-account-create-update-xbczb\" (UID: \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\") " pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.762215 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v97m\" (UniqueName: \"kubernetes.io/projected/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-kube-api-access-8v97m\") pod 
\"watcher-a4fe-account-create-update-xbczb\" (UID: \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\") " pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.764588 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-operator-scripts\") pod \"watcher-a4fe-account-create-update-xbczb\" (UID: \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\") " pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.790385 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v97m\" (UniqueName: \"kubernetes.io/projected/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-kube-api-access-8v97m\") pod \"watcher-a4fe-account-create-update-xbczb\" (UID: \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\") " pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.837236 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:48 crc kubenswrapper[4904]: I0223 10:23:48.973976 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.001494 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.086781 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-z9ncr"] Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.087115 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" podUID="73361e7e-1ced-438c-9d84-f425467c6717" containerName="dnsmasq-dns" containerID="cri-o://ae097e71fa68fe2f15968aed3c4334d9794e62e543c046f80e04710fb2d49bb2" gracePeriod=10 Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.385190 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-c9pnc"] Feb 23 10:23:49 crc kubenswrapper[4904]: W0223 10:23:49.403584 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e2ded5_4041_484c_b117_6df53876c328.slice/crio-31889bdf6f7fae00b134fd39ef49770a0227de6f267aaf7a093cc9782b053452 WatchSource:0}: Error finding container 31889bdf6f7fae00b134fd39ef49770a0227de6f267aaf7a093cc9782b053452: Status 404 returned error can't find the container with id 31889bdf6f7fae00b134fd39ef49770a0227de6f267aaf7a093cc9782b053452 Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.417302 4904 generic.go:334] "Generic (PLEG): container finished" podID="b3ef6d86-3b22-41cd-9a28-f4e7844ec25f" containerID="abace7ea57a9d1b994083f2eb64ab5689be1c30b0a24fdd96c1e2e601746dcea" exitCode=0 Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.417380 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9169-account-create-update-j44z4" event={"ID":"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f","Type":"ContainerDied","Data":"abace7ea57a9d1b994083f2eb64ab5689be1c30b0a24fdd96c1e2e601746dcea"} Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.417410 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9169-account-create-update-j44z4" 
event={"ID":"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f","Type":"ContainerStarted","Data":"fa201353027feff53e56c967e4b389ed4225a0e967d8186a7880508c3f2af112"} Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.418190 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hddnt" event={"ID":"8b69c9fa-305e-484f-98e7-c8928bec7a13","Type":"ContainerStarted","Data":"114ddb7efa65ac480244b7d1f1a2ccf483d592cecb8f1b88414bcdec1085bb6c"} Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.419226 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-c9pnc" event={"ID":"83e2ded5-4041-484c-b117-6df53876c328","Type":"ContainerStarted","Data":"31889bdf6f7fae00b134fd39ef49770a0227de6f267aaf7a093cc9782b053452"} Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.420900 4904 generic.go:334] "Generic (PLEG): container finished" podID="cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d" containerID="68999a16abbf0c15459e023d535ce66cdb82e11d6ed962593d1a83abfe8cc1c7" exitCode=0 Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.420948 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpl7h" event={"ID":"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d","Type":"ContainerDied","Data":"68999a16abbf0c15459e023d535ce66cdb82e11d6ed962593d1a83abfe8cc1c7"} Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.420964 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpl7h" event={"ID":"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d","Type":"ContainerStarted","Data":"47f5bb2a75d3970c654c4b2fc530c1f00691f248691c75875209d1797deba61d"} Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.424162 4904 generic.go:334] "Generic (PLEG): container finished" podID="73361e7e-1ced-438c-9d84-f425467c6717" containerID="ae097e71fa68fe2f15968aed3c4334d9794e62e543c046f80e04710fb2d49bb2" exitCode=0 Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.424206 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" event={"ID":"73361e7e-1ced-438c-9d84-f425467c6717","Type":"ContainerDied","Data":"ae097e71fa68fe2f15968aed3c4334d9794e62e543c046f80e04710fb2d49bb2"} Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.616909 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-a4fe-account-create-update-xbczb"] Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.733689 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.905842 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-sb\") pod \"73361e7e-1ced-438c-9d84-f425467c6717\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.905954 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-nb\") pod \"73361e7e-1ced-438c-9d84-f425467c6717\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.906106 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-config\") pod \"73361e7e-1ced-438c-9d84-f425467c6717\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.906129 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-dns-svc\") pod \"73361e7e-1ced-438c-9d84-f425467c6717\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.906167 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/73361e7e-1ced-438c-9d84-f425467c6717-kube-api-access-2djqq\") pod \"73361e7e-1ced-438c-9d84-f425467c6717\" (UID: \"73361e7e-1ced-438c-9d84-f425467c6717\") " Feb 23 10:23:49 crc kubenswrapper[4904]: I0223 10:23:49.915336 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73361e7e-1ced-438c-9d84-f425467c6717-kube-api-access-2djqq" (OuterVolumeSpecName: "kube-api-access-2djqq") pod "73361e7e-1ced-438c-9d84-f425467c6717" (UID: "73361e7e-1ced-438c-9d84-f425467c6717"). InnerVolumeSpecName "kube-api-access-2djqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.008609 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/73361e7e-1ced-438c-9d84-f425467c6717-kube-api-access-2djqq\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.064483 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-config" (OuterVolumeSpecName: "config") pod "73361e7e-1ced-438c-9d84-f425467c6717" (UID: "73361e7e-1ced-438c-9d84-f425467c6717"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.066572 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73361e7e-1ced-438c-9d84-f425467c6717" (UID: "73361e7e-1ced-438c-9d84-f425467c6717"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.080686 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73361e7e-1ced-438c-9d84-f425467c6717" (UID: "73361e7e-1ced-438c-9d84-f425467c6717"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.083916 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73361e7e-1ced-438c-9d84-f425467c6717" (UID: "73361e7e-1ced-438c-9d84-f425467c6717"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.112158 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.112567 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.112651 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.112709 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73361e7e-1ced-438c-9d84-f425467c6717-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.439534 4904 generic.go:334] "Generic (PLEG): container finished" podID="83e2ded5-4041-484c-b117-6df53876c328" containerID="1404d599ef0469f9373629e84dff01fdb884d2a9f9cf9f77b3a37eb69bcdb9d8" exitCode=0 Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.439667 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-c9pnc" event={"ID":"83e2ded5-4041-484c-b117-6df53876c328","Type":"ContainerDied","Data":"1404d599ef0469f9373629e84dff01fdb884d2a9f9cf9f77b3a37eb69bcdb9d8"} Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.442648 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" event={"ID":"73361e7e-1ced-438c-9d84-f425467c6717","Type":"ContainerDied","Data":"dc28a555c934c48797ecc04c5fd7c971cf02435b5de0dc78db9dba74d9b69343"} Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.442725 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-z9ncr" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.442734 4904 scope.go:117] "RemoveContainer" containerID="ae097e71fa68fe2f15968aed3c4334d9794e62e543c046f80e04710fb2d49bb2" Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.444320 4904 generic.go:334] "Generic (PLEG): container finished" podID="94708a44-bcf8-4084-8ba8-8c1ffdcf70e4" containerID="287b17de3d479a4678a9cce8e78fc2c7bda1eab41e76e77259bac2a25abbd69f" exitCode=0 Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.444407 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a4fe-account-create-update-xbczb" event={"ID":"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4","Type":"ContainerDied","Data":"287b17de3d479a4678a9cce8e78fc2c7bda1eab41e76e77259bac2a25abbd69f"} Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.444460 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a4fe-account-create-update-xbczb" event={"ID":"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4","Type":"ContainerStarted","Data":"0471faf849ca02683e7ddcba31ad67d5a6212290888bec76eb796d7e0e213c17"} Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.707247 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-z9ncr"] Feb 23 10:23:50 crc kubenswrapper[4904]: I0223 10:23:50.717602 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-z9ncr"] Feb 23 10:23:51 crc kubenswrapper[4904]: I0223 10:23:51.268912 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73361e7e-1ced-438c-9d84-f425467c6717" path="/var/lib/kubelet/pods/73361e7e-1ced-438c-9d84-f425467c6717/volumes" Feb 23 10:23:51 crc kubenswrapper[4904]: I0223 10:23:51.461067 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerStarted","Data":"642019c9cbd02857f05fb0712d9b926595677146ad53618d42edceb8b025502a"} Feb 23 10:23:51 crc kubenswrapper[4904]: I0223 10:23:51.848590 4904 scope.go:117] "RemoveContainer" containerID="32063db43b3ecae51a75431366d7d062145777b354dd4f1ab80bf6d87b92e7be" Feb 23 10:23:51 crc kubenswrapper[4904]: I0223 10:23:51.902133 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.059333 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvnzd\" (UniqueName: \"kubernetes.io/projected/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-kube-api-access-rvnzd\") pod \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\" (UID: \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\") " Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.059495 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-operator-scripts\") pod \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\" (UID: \"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d\") " Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.060488 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d" (UID: "cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.067287 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-kube-api-access-rvnzd" (OuterVolumeSpecName: "kube-api-access-rvnzd") pod "cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d" (UID: "cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d"). InnerVolumeSpecName "kube-api-access-rvnzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.162443 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvnzd\" (UniqueName: \"kubernetes.io/projected/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-kube-api-access-rvnzd\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.162523 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.480645 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpl7h" event={"ID":"cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d","Type":"ContainerDied","Data":"47f5bb2a75d3970c654c4b2fc530c1f00691f248691c75875209d1797deba61d"} Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.481851 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47f5bb2a75d3970c654c4b2fc530c1f00691f248691c75875209d1797deba61d" Feb 23 10:23:52 crc kubenswrapper[4904]: I0223 10:23:52.480939 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tpl7h" Feb 23 10:23:53 crc kubenswrapper[4904]: I0223 10:23:53.058796 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tpl7h"] Feb 23 10:23:53 crc kubenswrapper[4904]: I0223 10:23:53.071321 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tpl7h"] Feb 23 10:23:53 crc kubenswrapper[4904]: I0223 10:23:53.269750 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d" path="/var/lib/kubelet/pods/cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d/volumes" Feb 23 10:23:53 crc kubenswrapper[4904]: I0223 10:23:53.842148 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:53 crc kubenswrapper[4904]: I0223 10:23:53.921223 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:53 crc kubenswrapper[4904]: I0223 10:23:53.964221 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.004015 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmfhx\" (UniqueName: \"kubernetes.io/projected/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-kube-api-access-lmfhx\") pod \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\" (UID: \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\") " Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.004140 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-operator-scripts\") pod \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\" (UID: \"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f\") " Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.005228 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3ef6d86-3b22-41cd-9a28-f4e7844ec25f" (UID: "b3ef6d86-3b22-41cd-9a28-f4e7844ec25f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.010846 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-kube-api-access-lmfhx" (OuterVolumeSpecName: "kube-api-access-lmfhx") pod "b3ef6d86-3b22-41cd-9a28-f4e7844ec25f" (UID: "b3ef6d86-3b22-41cd-9a28-f4e7844ec25f"). InnerVolumeSpecName "kube-api-access-lmfhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.106598 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v97m\" (UniqueName: \"kubernetes.io/projected/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-kube-api-access-8v97m\") pod \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\" (UID: \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\") " Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.106682 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-operator-scripts\") pod \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\" (UID: \"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4\") " Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.106754 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hzmq\" (UniqueName: \"kubernetes.io/projected/83e2ded5-4041-484c-b117-6df53876c328-kube-api-access-5hzmq\") pod \"83e2ded5-4041-484c-b117-6df53876c328\" (UID: \"83e2ded5-4041-484c-b117-6df53876c328\") " Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.106816 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e2ded5-4041-484c-b117-6df53876c328-operator-scripts\") pod \"83e2ded5-4041-484c-b117-6df53876c328\" (UID: \"83e2ded5-4041-484c-b117-6df53876c328\") " Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.107203 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94708a44-bcf8-4084-8ba8-8c1ffdcf70e4" (UID: "94708a44-bcf8-4084-8ba8-8c1ffdcf70e4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.107665 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83e2ded5-4041-484c-b117-6df53876c328-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83e2ded5-4041-484c-b117-6df53876c328" (UID: "83e2ded5-4041-484c-b117-6df53876c328"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.107827 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmfhx\" (UniqueName: \"kubernetes.io/projected/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-kube-api-access-lmfhx\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.108090 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.108107 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.110377 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e2ded5-4041-484c-b117-6df53876c328-kube-api-access-5hzmq" (OuterVolumeSpecName: "kube-api-access-5hzmq") pod "83e2ded5-4041-484c-b117-6df53876c328" (UID: "83e2ded5-4041-484c-b117-6df53876c328"). InnerVolumeSpecName "kube-api-access-5hzmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.114517 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-kube-api-access-8v97m" (OuterVolumeSpecName: "kube-api-access-8v97m") pod "94708a44-bcf8-4084-8ba8-8c1ffdcf70e4" (UID: "94708a44-bcf8-4084-8ba8-8c1ffdcf70e4"). InnerVolumeSpecName "kube-api-access-8v97m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.209487 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v97m\" (UniqueName: \"kubernetes.io/projected/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4-kube-api-access-8v97m\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.209514 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hzmq\" (UniqueName: \"kubernetes.io/projected/83e2ded5-4041-484c-b117-6df53876c328-kube-api-access-5hzmq\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.209524 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e2ded5-4041-484c-b117-6df53876c328-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.498976 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hddnt" event={"ID":"8b69c9fa-305e-484f-98e7-c8928bec7a13","Type":"ContainerStarted","Data":"7a66a4ff70967109c7bf96da40641a693388b2c50635fd2a9cf1811b62615c0e"} Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.501310 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a4fe-account-create-update-xbczb" event={"ID":"94708a44-bcf8-4084-8ba8-8c1ffdcf70e4","Type":"ContainerDied","Data":"0471faf849ca02683e7ddcba31ad67d5a6212290888bec76eb796d7e0e213c17"} Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.501357 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0471faf849ca02683e7ddcba31ad67d5a6212290888bec76eb796d7e0e213c17" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.501408 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a4fe-account-create-update-xbczb" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.504796 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-c9pnc" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.504813 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-c9pnc" event={"ID":"83e2ded5-4041-484c-b117-6df53876c328","Type":"ContainerDied","Data":"31889bdf6f7fae00b134fd39ef49770a0227de6f267aaf7a093cc9782b053452"} Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.504862 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31889bdf6f7fae00b134fd39ef49770a0227de6f267aaf7a093cc9782b053452" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.507248 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9169-account-create-update-j44z4" event={"ID":"b3ef6d86-3b22-41cd-9a28-f4e7844ec25f","Type":"ContainerDied","Data":"fa201353027feff53e56c967e4b389ed4225a0e967d8186a7880508c3f2af112"} Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.507271 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa201353027feff53e56c967e4b389ed4225a0e967d8186a7880508c3f2af112" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.507297 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9169-account-create-update-j44z4" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.537142 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hddnt" podStartSLOduration=6.466820402 podStartE2EDuration="11.537117801s" podCreationTimestamp="2026-02-23 10:23:43 +0000 UTC" firstStartedPulling="2026-02-23 10:23:48.610294656 +0000 UTC m=+1062.030668169" lastFinishedPulling="2026-02-23 10:23:53.680592055 +0000 UTC m=+1067.100965568" observedRunningTime="2026-02-23 10:23:54.525482891 +0000 UTC m=+1067.945856404" watchObservedRunningTime="2026-02-23 10:23:54.537117801 +0000 UTC m=+1067.957491314" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.623818 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l4n6r"] Feb 23 10:23:54 crc kubenswrapper[4904]: E0223 10:23:54.624243 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73361e7e-1ced-438c-9d84-f425467c6717" containerName="init" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624263 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="73361e7e-1ced-438c-9d84-f425467c6717" containerName="init" Feb 23 10:23:54 crc kubenswrapper[4904]: E0223 10:23:54.624292 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ef6d86-3b22-41cd-9a28-f4e7844ec25f" containerName="mariadb-account-create-update" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624301 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ef6d86-3b22-41cd-9a28-f4e7844ec25f" containerName="mariadb-account-create-update" Feb 23 10:23:54 crc kubenswrapper[4904]: E0223 10:23:54.624319 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d" containerName="mariadb-account-create-update" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624327 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d" containerName="mariadb-account-create-update" Feb 23 10:23:54 crc kubenswrapper[4904]: E0223 10:23:54.624341 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e2ded5-4041-484c-b117-6df53876c328" containerName="mariadb-database-create" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624349 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e2ded5-4041-484c-b117-6df53876c328" containerName="mariadb-database-create" Feb 23 10:23:54 crc kubenswrapper[4904]: E0223 10:23:54.624365 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94708a44-bcf8-4084-8ba8-8c1ffdcf70e4" containerName="mariadb-account-create-update" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624373 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="94708a44-bcf8-4084-8ba8-8c1ffdcf70e4" containerName="mariadb-account-create-update" Feb 23 10:23:54 crc kubenswrapper[4904]: E0223 10:23:54.624385 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73361e7e-1ced-438c-9d84-f425467c6717" containerName="dnsmasq-dns" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624394 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="73361e7e-1ced-438c-9d84-f425467c6717" containerName="dnsmasq-dns" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624582 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd3ac15-7ce7-4833-a9a4-df5aefacfc9d" containerName="mariadb-account-create-update" Feb 23 
10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624597 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="94708a44-bcf8-4084-8ba8-8c1ffdcf70e4" containerName="mariadb-account-create-update" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624610 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="73361e7e-1ced-438c-9d84-f425467c6717" containerName="dnsmasq-dns" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624625 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ef6d86-3b22-41cd-9a28-f4e7844ec25f" containerName="mariadb-account-create-update" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.624640 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e2ded5-4041-484c-b117-6df53876c328" containerName="mariadb-database-create" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.625471 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.628033 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.650175 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l4n6r"] Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.822027 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-operator-scripts\") pod \"root-account-create-update-l4n6r\" (UID: \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\") " pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.822112 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxh6q\" (UniqueName: \"kubernetes.io/projected/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-kube-api-access-gxh6q\") pod \"root-account-create-update-l4n6r\" (UID: \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\") " pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.924426 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxh6q\" (UniqueName: \"kubernetes.io/projected/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-kube-api-access-gxh6q\") pod \"root-account-create-update-l4n6r\" (UID: \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\") " pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.924687 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-operator-scripts\") pod \"root-account-create-update-l4n6r\" (UID: \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\") " pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.925572 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-operator-scripts\") pod \"root-account-create-update-l4n6r\" (UID: \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\") " pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:54 crc kubenswrapper[4904]: I0223 10:23:54.964671 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gxh6q\" (UniqueName: \"kubernetes.io/projected/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-kube-api-access-gxh6q\") pod \"root-account-create-update-l4n6r\" (UID: \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\") " pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:55 crc kubenswrapper[4904]: I0223 10:23:55.246509 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:55 crc kubenswrapper[4904]: I0223 10:23:55.348325 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 23 10:23:55 crc kubenswrapper[4904]: I0223 10:23:55.646140 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:23:55 crc kubenswrapper[4904]: E0223 10:23:55.646399 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 10:23:55 crc kubenswrapper[4904]: E0223 10:23:55.646887 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 10:23:55 crc kubenswrapper[4904]: E0223 10:23:55.646971 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift podName:71f24a32-6e0a-4a39-9570-92c373672a9b nodeName:}" failed. No retries permitted until 2026-02-23 10:24:11.646942415 +0000 UTC m=+1085.067315948 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift") pod "swift-storage-0" (UID: "71f24a32-6e0a-4a39-9570-92c373672a9b") : configmap "swift-ring-files" not found Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.035192 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vj425"] Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.038395 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vj425" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.057964 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f39f07b-ae9c-44dd-a512-2642f0a78b07-operator-scripts\") pod \"glance-db-create-vj425\" (UID: \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\") " pod="openstack/glance-db-create-vj425" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.058170 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdbw\" (UniqueName: \"kubernetes.io/projected/2f39f07b-ae9c-44dd-a512-2642f0a78b07-kube-api-access-whdbw\") pod \"glance-db-create-vj425\" (UID: \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\") " pod="openstack/glance-db-create-vj425" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.058833 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vj425"] Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.112080 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9895-account-create-update-8k6j7"] Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.113304 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.116199 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.129797 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9895-account-create-update-8k6j7"] Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.160127 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whdbw\" (UniqueName: \"kubernetes.io/projected/2f39f07b-ae9c-44dd-a512-2642f0a78b07-kube-api-access-whdbw\") pod \"glance-db-create-vj425\" (UID: \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\") " pod="openstack/glance-db-create-vj425" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.160290 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f39f07b-ae9c-44dd-a512-2642f0a78b07-operator-scripts\") pod \"glance-db-create-vj425\" (UID: \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\") " pod="openstack/glance-db-create-vj425" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.161826 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f39f07b-ae9c-44dd-a512-2642f0a78b07-operator-scripts\") pod \"glance-db-create-vj425\" (UID: \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\") " pod="openstack/glance-db-create-vj425" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.190009 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whdbw\" (UniqueName: \"kubernetes.io/projected/2f39f07b-ae9c-44dd-a512-2642f0a78b07-kube-api-access-whdbw\") pod \"glance-db-create-vj425\" (UID: \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\") " pod="openstack/glance-db-create-vj425" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.262497 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9e4686-034e-4478-a278-f2ceb0516da7-operator-scripts\") pod 
\"glance-9895-account-create-update-8k6j7\" (UID: \"6e9e4686-034e-4478-a278-f2ceb0516da7\") " pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.262589 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnw8v\" (UniqueName: \"kubernetes.io/projected/6e9e4686-034e-4478-a278-f2ceb0516da7-kube-api-access-gnw8v\") pod \"glance-9895-account-create-update-8k6j7\" (UID: \"6e9e4686-034e-4478-a278-f2ceb0516da7\") " pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.347870 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l4n6r"] Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.364159 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9e4686-034e-4478-a278-f2ceb0516da7-operator-scripts\") pod \"glance-9895-account-create-update-8k6j7\" (UID: \"6e9e4686-034e-4478-a278-f2ceb0516da7\") " pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.364213 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnw8v\" (UniqueName: \"kubernetes.io/projected/6e9e4686-034e-4478-a278-f2ceb0516da7-kube-api-access-gnw8v\") pod \"glance-9895-account-create-update-8k6j7\" (UID: \"6e9e4686-034e-4478-a278-f2ceb0516da7\") " pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.365860 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9e4686-034e-4478-a278-f2ceb0516da7-operator-scripts\") pod \"glance-9895-account-create-update-8k6j7\" (UID: \"6e9e4686-034e-4478-a278-f2ceb0516da7\") " pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.383312 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnw8v\" (UniqueName: \"kubernetes.io/projected/6e9e4686-034e-4478-a278-f2ceb0516da7-kube-api-access-gnw8v\") pod \"glance-9895-account-create-update-8k6j7\" (UID: \"6e9e4686-034e-4478-a278-f2ceb0516da7\") " pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.412041 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vj425" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.451881 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.597279 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerStarted","Data":"6075bc51bbd052d7b2e2a46006473c9144cd0843961a8d6d84454fd68f8e8af9"} Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.609411 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4n6r" event={"ID":"95cb32e0-df2b-453f-9734-d3e37f6aa5eb","Type":"ContainerStarted","Data":"3db63b22dedbccef9d7abb9e3db114b1e5c000f0a0f753511d957c6bfa63fde9"} Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.609819 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4n6r" event={"ID":"95cb32e0-df2b-453f-9734-d3e37f6aa5eb","Type":"ContainerStarted","Data":"7411cee694de3ecbec10c3c78075b4ba7720a86a22358026f9eef6a65ef76bb9"} Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.643487 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.373713408 podStartE2EDuration="58.643457554s" podCreationTimestamp="2026-02-23 10:22:58 +0000 UTC" firstStartedPulling="2026-02-23 10:23:21.660819258 +0000 UTC m=+1035.081192761" lastFinishedPulling="2026-02-23 10:23:55.930563404 +0000 UTC m=+1069.350936907" observedRunningTime="2026-02-23 10:23:56.643174486 +0000 UTC m=+1070.063547999" watchObservedRunningTime="2026-02-23 10:23:56.643457554 +0000 UTC m=+1070.063831067" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.679434 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-l4n6r" podStartSLOduration=2.679411634 podStartE2EDuration="2.679411634s" podCreationTimestamp="2026-02-23 10:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:23:56.677226952 +0000 UTC m=+1070.097600465" watchObservedRunningTime="2026-02-23 10:23:56.679411634 +0000 UTC m=+1070.099785147" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.901043 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vj425"] Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.930496 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wtd9h"] Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.933292 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wtd9h" Feb 23 10:23:56 crc kubenswrapper[4904]: I0223 10:23:56.941281 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wtd9h"] Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.021898 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e9fa-account-create-update-n2cwd"] Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.024058 4904 util.go:30] "No sandbox for pod can be found. 
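[editor's note] The two pod_startup_latency_tracker records above fit a simple relation: podStartSLOduration is the end-to-end duration (podCreationTimestamp to watchObservedRunningTime) minus the image-pull window (firstStartedPulling to lastFinishedPulling). When both pull timestamps are the zero time, the window is zero and the SLO duration equals the E2E duration, as for root-account-create-update-l4n6r. The prometheus-metric-storage-0 figures reproduce exactly under this reading:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the prometheus-metric-storage-0 record above.
	created := time.Date(2026, 2, 23, 10, 22, 58, 0, time.UTC)
	firstPull := time.Date(2026, 2, 23, 10, 23, 21, 660819258, time.UTC)
	lastPull := time.Date(2026, 2, 23, 10, 23, 55, 930563404, time.UTC)
	observed := time.Date(2026, 2, 23, 10, 23, 56, 643457554, time.UTC)

	e2e := observed.Sub(created)         // 58.643457554s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 24.373713408s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```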
Need to start a new one" pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.035451 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.047687 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e9fa-account-create-update-n2cwd"] Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.090027 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40413731-ae44-4eca-a751-3ba3a4e42558-operator-scripts\") pod \"keystone-db-create-wtd9h\" (UID: \"40413731-ae44-4eca-a751-3ba3a4e42558\") " pod="openstack/keystone-db-create-wtd9h" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.090132 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fz6b\" (UniqueName: \"kubernetes.io/projected/40413731-ae44-4eca-a751-3ba3a4e42558-kube-api-access-7fz6b\") pod \"keystone-db-create-wtd9h\" (UID: \"40413731-ae44-4eca-a751-3ba3a4e42558\") " pod="openstack/keystone-db-create-wtd9h" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.191915 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fz6b\" (UniqueName: \"kubernetes.io/projected/40413731-ae44-4eca-a751-3ba3a4e42558-kube-api-access-7fz6b\") pod \"keystone-db-create-wtd9h\" (UID: \"40413731-ae44-4eca-a751-3ba3a4e42558\") " pod="openstack/keystone-db-create-wtd9h" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.192000 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfdl2\" (UniqueName: \"kubernetes.io/projected/c53aa525-6ef1-4984-803a-fc0466c201b3-kube-api-access-cfdl2\") pod \"keystone-e9fa-account-create-update-n2cwd\" (UID: \"c53aa525-6ef1-4984-803a-fc0466c201b3\") " pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.192135 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53aa525-6ef1-4984-803a-fc0466c201b3-operator-scripts\") pod \"keystone-e9fa-account-create-update-n2cwd\" (UID: \"c53aa525-6ef1-4984-803a-fc0466c201b3\") " pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.192180 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40413731-ae44-4eca-a751-3ba3a4e42558-operator-scripts\") pod \"keystone-db-create-wtd9h\" (UID: \"40413731-ae44-4eca-a751-3ba3a4e42558\") " pod="openstack/keystone-db-create-wtd9h" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.193345 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40413731-ae44-4eca-a751-3ba3a4e42558-operator-scripts\") pod \"keystone-db-create-wtd9h\" (UID: \"40413731-ae44-4eca-a751-3ba3a4e42558\") " pod="openstack/keystone-db-create-wtd9h" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.209046 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9895-account-create-update-8k6j7"] Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.225333 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7fz6b\" (UniqueName: \"kubernetes.io/projected/40413731-ae44-4eca-a751-3ba3a4e42558-kube-api-access-7fz6b\") pod \"keystone-db-create-wtd9h\" (UID: \"40413731-ae44-4eca-a751-3ba3a4e42558\") " pod="openstack/keystone-db-create-wtd9h" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.229516 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-727gm"] Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.234646 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-727gm" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.298009 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wtd9h" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.303222 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfdl2\" (UniqueName: \"kubernetes.io/projected/c53aa525-6ef1-4984-803a-fc0466c201b3-kube-api-access-cfdl2\") pod \"keystone-e9fa-account-create-update-n2cwd\" (UID: \"c53aa525-6ef1-4984-803a-fc0466c201b3\") " pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.303376 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53aa525-6ef1-4984-803a-fc0466c201b3-operator-scripts\") pod \"keystone-e9fa-account-create-update-n2cwd\" (UID: \"c53aa525-6ef1-4984-803a-fc0466c201b3\") " pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.304318 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53aa525-6ef1-4984-803a-fc0466c201b3-operator-scripts\") pod \"keystone-e9fa-account-create-update-n2cwd\" (UID: \"c53aa525-6ef1-4984-803a-fc0466c201b3\") " pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.308090 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-727gm"] Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.326485 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfdl2\" (UniqueName: \"kubernetes.io/projected/c53aa525-6ef1-4984-803a-fc0466c201b3-kube-api-access-cfdl2\") pod \"keystone-e9fa-account-create-update-n2cwd\" (UID: \"c53aa525-6ef1-4984-803a-fc0466c201b3\") " pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.384120 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.408167 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6cb\" (UniqueName: \"kubernetes.io/projected/83019808-1db2-46ed-87bd-ef75476802e7-kube-api-access-kk6cb\") pod \"placement-db-create-727gm\" (UID: \"83019808-1db2-46ed-87bd-ef75476802e7\") " pod="openstack/placement-db-create-727gm" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.408292 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83019808-1db2-46ed-87bd-ef75476802e7-operator-scripts\") pod \"placement-db-create-727gm\" (UID: \"83019808-1db2-46ed-87bd-ef75476802e7\") " pod="openstack/placement-db-create-727gm" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.510561 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6cb\" (UniqueName: \"kubernetes.io/projected/83019808-1db2-46ed-87bd-ef75476802e7-kube-api-access-kk6cb\") pod \"placement-db-create-727gm\" (UID: \"83019808-1db2-46ed-87bd-ef75476802e7\") " pod="openstack/placement-db-create-727gm" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.511603 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83019808-1db2-46ed-87bd-ef75476802e7-operator-scripts\") pod \"placement-db-create-727gm\" (UID: \"83019808-1db2-46ed-87bd-ef75476802e7\") " pod="openstack/placement-db-create-727gm" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.512461 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83019808-1db2-46ed-87bd-ef75476802e7-operator-scripts\") pod \"placement-db-create-727gm\" (UID: \"83019808-1db2-46ed-87bd-ef75476802e7\") " pod="openstack/placement-db-create-727gm" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.538301 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6cb\" (UniqueName: \"kubernetes.io/projected/83019808-1db2-46ed-87bd-ef75476802e7-kube-api-access-kk6cb\") pod \"placement-db-create-727gm\" (UID: \"83019808-1db2-46ed-87bd-ef75476802e7\") " pod="openstack/placement-db-create-727gm" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.590638 4904 util.go:30] "No sandbox for pod can be found. 
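[editor's note] Each of these job pods mounts the same two volume types: a ConfigMap volume (operator-scripts) and a projected service-account token (kube-api-access-*). The mounts here succeed because the referenced objects exist; the etc-swift failure earlier shows the other path. A hedged client-go sketch for checking such a reference out of band; the kubeconfig path is an assumption, and the lookup mirrors, rather than reuses, the kubelet's own ConfigMap fetch:

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The reference the projected volume above could not resolve.
	_, err = client.CoreV1().ConfigMaps("openstack").
		Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println(`configmap "swift-ring-files" not found`)
	case err != nil:
		panic(err)
	default:
		fmt.Println("configmap present; the mount can proceed")
	}
}
```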
Need to start a new one" pod="openstack/placement-db-create-727gm" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.637613 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vj425" event={"ID":"2f39f07b-ae9c-44dd-a512-2642f0a78b07","Type":"ContainerDied","Data":"a9a6a4a9976c5fefe8ed2595cf5486ade6e7661d1e4b27aa80b6557dc5f0c767"} Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.638743 4904 generic.go:334] "Generic (PLEG): container finished" podID="2f39f07b-ae9c-44dd-a512-2642f0a78b07" containerID="a9a6a4a9976c5fefe8ed2595cf5486ade6e7661d1e4b27aa80b6557dc5f0c767" exitCode=0 Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.638887 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vj425" event={"ID":"2f39f07b-ae9c-44dd-a512-2642f0a78b07","Type":"ContainerStarted","Data":"b0fddd035b91a0555942fba95d6aebd6e441551fdf1d17f56d28bcd957175e24"} Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.640827 4904 generic.go:334] "Generic (PLEG): container finished" podID="95cb32e0-df2b-453f-9734-d3e37f6aa5eb" containerID="3db63b22dedbccef9d7abb9e3db114b1e5c000f0a0f753511d957c6bfa63fde9" exitCode=0 Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.640893 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4n6r" event={"ID":"95cb32e0-df2b-453f-9734-d3e37f6aa5eb","Type":"ContainerDied","Data":"3db63b22dedbccef9d7abb9e3db114b1e5c000f0a0f753511d957c6bfa63fde9"} Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.645391 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9895-account-create-update-8k6j7" event={"ID":"6e9e4686-034e-4478-a278-f2ceb0516da7","Type":"ContainerStarted","Data":"ea9bf472e6bda64a89f4c1bc320f60e50b66e475ed3b0c21c5851a1b869d8c0c"} Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.645421 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9895-account-create-update-8k6j7" event={"ID":"6e9e4686-034e-4478-a278-f2ceb0516da7","Type":"ContainerStarted","Data":"4dfff1f3560a930c0ed8cd76477e8ee66dd9abb8efa58139db007067ee0b4806"} Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.683905 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9895-account-create-update-8k6j7" podStartSLOduration=1.683875187 podStartE2EDuration="1.683875187s" podCreationTimestamp="2026-02-23 10:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:23:57.674250854 +0000 UTC m=+1071.094624367" watchObservedRunningTime="2026-02-23 10:23:57.683875187 +0000 UTC m=+1071.104248700" Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.849947 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wtd9h"] Feb 23 10:23:57 crc kubenswrapper[4904]: I0223 10:23:57.942201 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e9fa-account-create-update-n2cwd"] Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.092615 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-727gm"] Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.654540 4904 generic.go:334] "Generic (PLEG): container finished" podID="83019808-1db2-46ed-87bd-ef75476802e7" containerID="dbaaa8f2caffa0897653f38fd70a874922d96d82228a8d631cb60e2abb18acac" exitCode=0 Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 
10:23:58.654679 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-727gm" event={"ID":"83019808-1db2-46ed-87bd-ef75476802e7","Type":"ContainerDied","Data":"dbaaa8f2caffa0897653f38fd70a874922d96d82228a8d631cb60e2abb18acac"} Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.654954 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-727gm" event={"ID":"83019808-1db2-46ed-87bd-ef75476802e7","Type":"ContainerStarted","Data":"aa599bf52cb28481d0d4acbc27d088f8793258833adbc798058b75bd0d5c8c32"} Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.656920 4904 generic.go:334] "Generic (PLEG): container finished" podID="c53aa525-6ef1-4984-803a-fc0466c201b3" containerID="81dc5f182913791237c02172c9322ccbc5479d5cfa8e4715a433a1b68eb5c1b2" exitCode=0 Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.657134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e9fa-account-create-update-n2cwd" event={"ID":"c53aa525-6ef1-4984-803a-fc0466c201b3","Type":"ContainerDied","Data":"81dc5f182913791237c02172c9322ccbc5479d5cfa8e4715a433a1b68eb5c1b2"} Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.657314 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e9fa-account-create-update-n2cwd" event={"ID":"c53aa525-6ef1-4984-803a-fc0466c201b3","Type":"ContainerStarted","Data":"3a112c8a093ecafd6a612fd6c4103beabb7aef6f9a590833427e9ef5378cf7fe"} Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.659493 4904 generic.go:334] "Generic (PLEG): container finished" podID="6e9e4686-034e-4478-a278-f2ceb0516da7" containerID="ea9bf472e6bda64a89f4c1bc320f60e50b66e475ed3b0c21c5851a1b869d8c0c" exitCode=0 Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.659568 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9895-account-create-update-8k6j7" event={"ID":"6e9e4686-034e-4478-a278-f2ceb0516da7","Type":"ContainerDied","Data":"ea9bf472e6bda64a89f4c1bc320f60e50b66e475ed3b0c21c5851a1b869d8c0c"} Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.663975 4904 generic.go:334] "Generic (PLEG): container finished" podID="40413731-ae44-4eca-a751-3ba3a4e42558" containerID="9b2ed722df78c25b0cd8691be0e3eb66b413f0f4c3c563f9053691c397dd82a2" exitCode=0 Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.664073 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wtd9h" event={"ID":"40413731-ae44-4eca-a751-3ba3a4e42558","Type":"ContainerDied","Data":"9b2ed722df78c25b0cd8691be0e3eb66b413f0f4c3c563f9053691c397dd82a2"} Feb 23 10:23:58 crc kubenswrapper[4904]: I0223 10:23:58.664474 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wtd9h" event={"ID":"40413731-ae44-4eca-a751-3ba3a4e42558","Type":"ContainerStarted","Data":"62fbc2b3c62324507081ac7fa9433a7857e611129a5bb32a3bfec579a456c1e8"} Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.142103 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vj425" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.243177 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.270814 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whdbw\" (UniqueName: \"kubernetes.io/projected/2f39f07b-ae9c-44dd-a512-2642f0a78b07-kube-api-access-whdbw\") pod \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\" (UID: \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\") " Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.271019 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f39f07b-ae9c-44dd-a512-2642f0a78b07-operator-scripts\") pod \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\" (UID: \"2f39f07b-ae9c-44dd-a512-2642f0a78b07\") " Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.273410 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f39f07b-ae9c-44dd-a512-2642f0a78b07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f39f07b-ae9c-44dd-a512-2642f0a78b07" (UID: "2f39f07b-ae9c-44dd-a512-2642f0a78b07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.283003 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f39f07b-ae9c-44dd-a512-2642f0a78b07-kube-api-access-whdbw" (OuterVolumeSpecName: "kube-api-access-whdbw") pod "2f39f07b-ae9c-44dd-a512-2642f0a78b07" (UID: "2f39f07b-ae9c-44dd-a512-2642f0a78b07"). InnerVolumeSpecName "kube-api-access-whdbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.373386 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-operator-scripts\") pod \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\" (UID: \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\") " Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.373550 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxh6q\" (UniqueName: \"kubernetes.io/projected/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-kube-api-access-gxh6q\") pod \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\" (UID: \"95cb32e0-df2b-453f-9734-d3e37f6aa5eb\") " Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.374221 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95cb32e0-df2b-453f-9734-d3e37f6aa5eb" (UID: "95cb32e0-df2b-453f-9734-d3e37f6aa5eb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.374487 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whdbw\" (UniqueName: \"kubernetes.io/projected/2f39f07b-ae9c-44dd-a512-2642f0a78b07-kube-api-access-whdbw\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.374499 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.374511 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f39f07b-ae9c-44dd-a512-2642f0a78b07-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.389253 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-kube-api-access-gxh6q" (OuterVolumeSpecName: "kube-api-access-gxh6q") pod "95cb32e0-df2b-453f-9734-d3e37f6aa5eb" (UID: "95cb32e0-df2b-453f-9734-d3e37f6aa5eb"). InnerVolumeSpecName "kube-api-access-gxh6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.477184 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxh6q\" (UniqueName: \"kubernetes.io/projected/95cb32e0-df2b-453f-9734-d3e37f6aa5eb-kube-api-access-gxh6q\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.676067 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4n6r" event={"ID":"95cb32e0-df2b-453f-9734-d3e37f6aa5eb","Type":"ContainerDied","Data":"7411cee694de3ecbec10c3c78075b4ba7720a86a22358026f9eef6a65ef76bb9"} Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.676170 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7411cee694de3ecbec10c3c78075b4ba7720a86a22358026f9eef6a65ef76bb9" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.676105 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l4n6r" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.678020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vj425" event={"ID":"2f39f07b-ae9c-44dd-a512-2642f0a78b07","Type":"ContainerDied","Data":"b0fddd035b91a0555942fba95d6aebd6e441551fdf1d17f56d28bcd957175e24"} Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.678095 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0fddd035b91a0555942fba95d6aebd6e441551fdf1d17f56d28bcd957175e24" Feb 23 10:23:59 crc kubenswrapper[4904]: I0223 10:23:59.678120 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vj425" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.079885 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-727gm" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.196579 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk6cb\" (UniqueName: \"kubernetes.io/projected/83019808-1db2-46ed-87bd-ef75476802e7-kube-api-access-kk6cb\") pod \"83019808-1db2-46ed-87bd-ef75476802e7\" (UID: \"83019808-1db2-46ed-87bd-ef75476802e7\") " Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.196792 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83019808-1db2-46ed-87bd-ef75476802e7-operator-scripts\") pod \"83019808-1db2-46ed-87bd-ef75476802e7\" (UID: \"83019808-1db2-46ed-87bd-ef75476802e7\") " Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.198227 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83019808-1db2-46ed-87bd-ef75476802e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83019808-1db2-46ed-87bd-ef75476802e7" (UID: "83019808-1db2-46ed-87bd-ef75476802e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.205635 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83019808-1db2-46ed-87bd-ef75476802e7-kube-api-access-kk6cb" (OuterVolumeSpecName: "kube-api-access-kk6cb") pod "83019808-1db2-46ed-87bd-ef75476802e7" (UID: "83019808-1db2-46ed-87bd-ef75476802e7"). InnerVolumeSpecName "kube-api-access-kk6cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.207845 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.207926 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.216709 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.299844 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83019808-1db2-46ed-87bd-ef75476802e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.299886 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk6cb\" (UniqueName: \"kubernetes.io/projected/83019808-1db2-46ed-87bd-ef75476802e7-kube-api-access-kk6cb\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.307188 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.311881 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wtd9h" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.318115 4904 util.go:48] "No ready sandbox for pod can be found. 
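[editor's note] The prometheus-metric-storage-0 probe transitions above (startup "unhealthy", then "started", readiness empty until then) reflect the usual gating: the readiness probe is not evaluated until the startup probe succeeds. A sketch of such a probe pair using the corev1 types; the endpoint and thresholds are illustrative assumptions, not the pod's actual spec:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	startup := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/-/ready", Port: intstr.FromInt(9090)},
		},
		PeriodSeconds:    5,  // re-probed until the kubelet reports "started"
		FailureThreshold: 60, // tolerate a slow first start before restarting
	}
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/-/ready", Port: intstr.FromInt(9090)},
		},
		PeriodSeconds: 10, // only runs once the startup probe has succeeded
	}
	fmt.Println(startup.FailureThreshold, readiness.PeriodSeconds)
}
```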
Need to start a new one" pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.503630 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnw8v\" (UniqueName: \"kubernetes.io/projected/6e9e4686-034e-4478-a278-f2ceb0516da7-kube-api-access-gnw8v\") pod \"6e9e4686-034e-4478-a278-f2ceb0516da7\" (UID: \"6e9e4686-034e-4478-a278-f2ceb0516da7\") " Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.503847 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfdl2\" (UniqueName: \"kubernetes.io/projected/c53aa525-6ef1-4984-803a-fc0466c201b3-kube-api-access-cfdl2\") pod \"c53aa525-6ef1-4984-803a-fc0466c201b3\" (UID: \"c53aa525-6ef1-4984-803a-fc0466c201b3\") " Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.503968 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9e4686-034e-4478-a278-f2ceb0516da7-operator-scripts\") pod \"6e9e4686-034e-4478-a278-f2ceb0516da7\" (UID: \"6e9e4686-034e-4478-a278-f2ceb0516da7\") " Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.504103 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fz6b\" (UniqueName: \"kubernetes.io/projected/40413731-ae44-4eca-a751-3ba3a4e42558-kube-api-access-7fz6b\") pod \"40413731-ae44-4eca-a751-3ba3a4e42558\" (UID: \"40413731-ae44-4eca-a751-3ba3a4e42558\") " Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.504277 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53aa525-6ef1-4984-803a-fc0466c201b3-operator-scripts\") pod \"c53aa525-6ef1-4984-803a-fc0466c201b3\" (UID: \"c53aa525-6ef1-4984-803a-fc0466c201b3\") " Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.504318 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40413731-ae44-4eca-a751-3ba3a4e42558-operator-scripts\") pod \"40413731-ae44-4eca-a751-3ba3a4e42558\" (UID: \"40413731-ae44-4eca-a751-3ba3a4e42558\") " Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.504667 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9e4686-034e-4478-a278-f2ceb0516da7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e9e4686-034e-4478-a278-f2ceb0516da7" (UID: "6e9e4686-034e-4478-a278-f2ceb0516da7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.504735 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c53aa525-6ef1-4984-803a-fc0466c201b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c53aa525-6ef1-4984-803a-fc0466c201b3" (UID: "c53aa525-6ef1-4984-803a-fc0466c201b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.505342 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40413731-ae44-4eca-a751-3ba3a4e42558-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40413731-ae44-4eca-a751-3ba3a4e42558" (UID: "40413731-ae44-4eca-a751-3ba3a4e42558"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.505517 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53aa525-6ef1-4984-803a-fc0466c201b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.505548 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40413731-ae44-4eca-a751-3ba3a4e42558-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.505561 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9e4686-034e-4478-a278-f2ceb0516da7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.508313 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40413731-ae44-4eca-a751-3ba3a4e42558-kube-api-access-7fz6b" (OuterVolumeSpecName: "kube-api-access-7fz6b") pod "40413731-ae44-4eca-a751-3ba3a4e42558" (UID: "40413731-ae44-4eca-a751-3ba3a4e42558"). InnerVolumeSpecName "kube-api-access-7fz6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.508349 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9e4686-034e-4478-a278-f2ceb0516da7-kube-api-access-gnw8v" (OuterVolumeSpecName: "kube-api-access-gnw8v") pod "6e9e4686-034e-4478-a278-f2ceb0516da7" (UID: "6e9e4686-034e-4478-a278-f2ceb0516da7"). InnerVolumeSpecName "kube-api-access-gnw8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.508949 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53aa525-6ef1-4984-803a-fc0466c201b3-kube-api-access-cfdl2" (OuterVolumeSpecName: "kube-api-access-cfdl2") pod "c53aa525-6ef1-4984-803a-fc0466c201b3" (UID: "c53aa525-6ef1-4984-803a-fc0466c201b3"). InnerVolumeSpecName "kube-api-access-cfdl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.607501 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fz6b\" (UniqueName: \"kubernetes.io/projected/40413731-ae44-4eca-a751-3ba3a4e42558-kube-api-access-7fz6b\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.607538 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnw8v\" (UniqueName: \"kubernetes.io/projected/6e9e4686-034e-4478-a278-f2ceb0516da7-kube-api-access-gnw8v\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.607550 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfdl2\" (UniqueName: \"kubernetes.io/projected/c53aa525-6ef1-4984-803a-fc0466c201b3-kube-api-access-cfdl2\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.688575 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e9fa-account-create-update-n2cwd" event={"ID":"c53aa525-6ef1-4984-803a-fc0466c201b3","Type":"ContainerDied","Data":"3a112c8a093ecafd6a612fd6c4103beabb7aef6f9a590833427e9ef5378cf7fe"} Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.688641 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a112c8a093ecafd6a612fd6c4103beabb7aef6f9a590833427e9ef5378cf7fe" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.688749 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e9fa-account-create-update-n2cwd" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.691915 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9895-account-create-update-8k6j7" event={"ID":"6e9e4686-034e-4478-a278-f2ceb0516da7","Type":"ContainerDied","Data":"4dfff1f3560a930c0ed8cd76477e8ee66dd9abb8efa58139db007067ee0b4806"} Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.692018 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dfff1f3560a930c0ed8cd76477e8ee66dd9abb8efa58139db007067ee0b4806" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.691955 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9895-account-create-update-8k6j7" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.693618 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wtd9h" event={"ID":"40413731-ae44-4eca-a751-3ba3a4e42558","Type":"ContainerDied","Data":"62fbc2b3c62324507081ac7fa9433a7857e611129a5bb32a3bfec579a456c1e8"} Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.693667 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wtd9h" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.693668 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62fbc2b3c62324507081ac7fa9433a7857e611129a5bb32a3bfec579a456c1e8" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.695332 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-727gm" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.695326 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-727gm" event={"ID":"83019808-1db2-46ed-87bd-ef75476802e7","Type":"ContainerDied","Data":"aa599bf52cb28481d0d4acbc27d088f8793258833adbc798058b75bd0d5c8c32"} Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.695378 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa599bf52cb28481d0d4acbc27d088f8793258833adbc798058b75bd0d5c8c32" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.697031 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:00 crc kubenswrapper[4904]: I0223 10:24:00.832888 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q7rxf" podUID="717c8a73-d7f4-48d3-920d-f573f4f9dc9b" containerName="ovn-controller" probeResult="failure" output=< Feb 23 10:24:00 crc kubenswrapper[4904]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 23 10:24:00 crc kubenswrapper[4904]: > Feb 23 10:24:02 crc kubenswrapper[4904]: I0223 10:24:02.719945 4904 generic.go:334] "Generic (PLEG): container finished" podID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerID="465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954" exitCode=0 Feb 23 10:24:02 crc kubenswrapper[4904]: I0223 10:24:02.720360 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e626c7f2-db46-4757-bd05-eedfba7b5fc8","Type":"ContainerDied","Data":"465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954"} Feb 23 10:24:02 crc kubenswrapper[4904]: I0223 10:24:02.735553 4904 generic.go:334] "Generic (PLEG): container finished" podID="8b69c9fa-305e-484f-98e7-c8928bec7a13" containerID="7a66a4ff70967109c7bf96da40641a693388b2c50635fd2a9cf1811b62615c0e" exitCode=0 Feb 23 10:24:02 crc kubenswrapper[4904]: I0223 10:24:02.735634 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hddnt" event={"ID":"8b69c9fa-305e-484f-98e7-c8928bec7a13","Type":"ContainerDied","Data":"7a66a4ff70967109c7bf96da40641a693388b2c50635fd2a9cf1811b62615c0e"} Feb 23 10:24:02 crc kubenswrapper[4904]: E0223 10:24:02.793899 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b69c9fa_305e_484f_98e7_c8928bec7a13.slice/crio-conmon-7a66a4ff70967109c7bf96da40641a693388b2c50635fd2a9cf1811b62615c0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b69c9fa_305e_484f_98e7_c8928bec7a13.slice/crio-7a66a4ff70967109c7bf96da40641a693388b2c50635fd2a9cf1811b62615c0e.scope\": RecentStats: unable to find data in memory cache]" Feb 23 10:24:03 crc kubenswrapper[4904]: I0223 10:24:03.069771 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l4n6r"] Feb 23 10:24:03 crc kubenswrapper[4904]: I0223 10:24:03.078579 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l4n6r"] Feb 23 10:24:03 crc kubenswrapper[4904]: I0223 10:24:03.264946 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cb32e0-df2b-453f-9734-d3e37f6aa5eb" 
path="/var/lib/kubelet/pods/95cb32e0-df2b-453f-9734-d3e37f6aa5eb/volumes" Feb 23 10:24:03 crc kubenswrapper[4904]: I0223 10:24:03.747969 4904 generic.go:334] "Generic (PLEG): container finished" podID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerID="157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652" exitCode=0 Feb 23 10:24:03 crc kubenswrapper[4904]: I0223 10:24:03.748092 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"670153e4-0ac6-4ae8-ab14-08a3f2537c6c","Type":"ContainerDied","Data":"157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652"} Feb 23 10:24:03 crc kubenswrapper[4904]: I0223 10:24:03.752262 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e626c7f2-db46-4757-bd05-eedfba7b5fc8","Type":"ContainerStarted","Data":"efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2"} Feb 23 10:24:03 crc kubenswrapper[4904]: I0223 10:24:03.752595 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:24:03 crc kubenswrapper[4904]: I0223 10:24:03.817025 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371964.037771 podStartE2EDuration="1m12.817005288s" podCreationTimestamp="2026-02-23 10:22:51 +0000 UTC" firstStartedPulling="2026-02-23 10:22:53.939759626 +0000 UTC m=+1007.360133139" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:24:03.811861572 +0000 UTC m=+1077.232235085" watchObservedRunningTime="2026-02-23 10:24:03.817005288 +0000 UTC m=+1077.237378801" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.072393 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.073238 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="prometheus" containerID="cri-o://204774cf3c338a14fa8418d4b7dabbbc8f8086735d48069849b565df6f421a3b" gracePeriod=600 Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.073450 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="thanos-sidecar" containerID="cri-o://6075bc51bbd052d7b2e2a46006473c9144cd0843961a8d6d84454fd68f8e8af9" gracePeriod=600 Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.073532 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="config-reloader" containerID="cri-o://642019c9cbd02857f05fb0712d9b926595677146ad53618d42edceb8b025502a" gracePeriod=600 Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.247792 4904 util.go:48] "No ready sandbox for pod can be found. 
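[editor's note] The rabbitmq-cell1-server-0 latency record above is numerically telling: lastFinishedPulling is the zero time (0001-01-01), so the pull window computed via time.Time.Sub saturates at math.MinInt64 nanoseconds, and subtracting that from the 1m12.8s end-to-end duration wraps around int64, matching the logged podStartSLOduration=-9223371964.037771. A self-contained reproduction:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the rabbitmq-cell1-server-0 record above.
	created := time.Date(2026, 2, 23, 10, 22, 51, 0, time.UTC)
	observed := time.Date(2026, 2, 23, 10, 24, 3, 817005288, time.UTC)
	firstPull := time.Date(2026, 2, 23, 10, 22, 53, 939759626, time.UTC)
	var lastPull time.Time // zero value: the pull never recorded as finished

	e2e := observed.Sub(created)    // 1m12.817005288s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // saturates at math.MinInt64 nanoseconds
	slo := e2e - pull               // plain int64 subtraction wraps around
	fmt.Println(slo.Seconds())      // -9.223371964037771e+09, as logged
}
```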
Need to start a new one" pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.319813 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-scripts\") pod \"8b69c9fa-305e-484f-98e7-c8928bec7a13\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.319944 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-combined-ca-bundle\") pod \"8b69c9fa-305e-484f-98e7-c8928bec7a13\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.320009 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnq2k\" (UniqueName: \"kubernetes.io/projected/8b69c9fa-305e-484f-98e7-c8928bec7a13-kube-api-access-nnq2k\") pod \"8b69c9fa-305e-484f-98e7-c8928bec7a13\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.320037 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b69c9fa-305e-484f-98e7-c8928bec7a13-etc-swift\") pod \"8b69c9fa-305e-484f-98e7-c8928bec7a13\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.320116 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-ring-data-devices\") pod \"8b69c9fa-305e-484f-98e7-c8928bec7a13\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.320185 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-swiftconf\") pod \"8b69c9fa-305e-484f-98e7-c8928bec7a13\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.320290 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-dispersionconf\") pod \"8b69c9fa-305e-484f-98e7-c8928bec7a13\" (UID: \"8b69c9fa-305e-484f-98e7-c8928bec7a13\") " Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.321020 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8b69c9fa-305e-484f-98e7-c8928bec7a13" (UID: "8b69c9fa-305e-484f-98e7-c8928bec7a13"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.321937 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b69c9fa-305e-484f-98e7-c8928bec7a13-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8b69c9fa-305e-484f-98e7-c8928bec7a13" (UID: "8b69c9fa-305e-484f-98e7-c8928bec7a13"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.332208 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b69c9fa-305e-484f-98e7-c8928bec7a13-kube-api-access-nnq2k" (OuterVolumeSpecName: "kube-api-access-nnq2k") pod "8b69c9fa-305e-484f-98e7-c8928bec7a13" (UID: "8b69c9fa-305e-484f-98e7-c8928bec7a13"). InnerVolumeSpecName "kube-api-access-nnq2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.345993 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8b69c9fa-305e-484f-98e7-c8928bec7a13" (UID: "8b69c9fa-305e-484f-98e7-c8928bec7a13"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.350201 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-scripts" (OuterVolumeSpecName: "scripts") pod "8b69c9fa-305e-484f-98e7-c8928bec7a13" (UID: "8b69c9fa-305e-484f-98e7-c8928bec7a13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.363037 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8b69c9fa-305e-484f-98e7-c8928bec7a13" (UID: "8b69c9fa-305e-484f-98e7-c8928bec7a13"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.374000 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b69c9fa-305e-484f-98e7-c8928bec7a13" (UID: "8b69c9fa-305e-484f-98e7-c8928bec7a13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.424071 4904 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.424103 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.424114 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.424124 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnq2k\" (UniqueName: \"kubernetes.io/projected/8b69c9fa-305e-484f-98e7-c8928bec7a13-kube-api-access-nnq2k\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.424136 4904 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b69c9fa-305e-484f-98e7-c8928bec7a13-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.424146 4904 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b69c9fa-305e-484f-98e7-c8928bec7a13-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.424158 4904 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b69c9fa-305e-484f-98e7-c8928bec7a13-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.776615 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hddnt" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.776619 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hddnt" event={"ID":"8b69c9fa-305e-484f-98e7-c8928bec7a13","Type":"ContainerDied","Data":"114ddb7efa65ac480244b7d1f1a2ccf483d592cecb8f1b88414bcdec1085bb6c"} Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.777284 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114ddb7efa65ac480244b7d1f1a2ccf483d592cecb8f1b88414bcdec1085bb6c" Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.794649 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"670153e4-0ac6-4ae8-ab14-08a3f2537c6c","Type":"ContainerStarted","Data":"2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f"} Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.817867 4904 generic.go:334] "Generic (PLEG): container finished" podID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerID="6075bc51bbd052d7b2e2a46006473c9144cd0843961a8d6d84454fd68f8e8af9" exitCode=0 Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.817918 4904 generic.go:334] "Generic (PLEG): container finished" podID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerID="642019c9cbd02857f05fb0712d9b926595677146ad53618d42edceb8b025502a" exitCode=0 Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.817931 4904 generic.go:334] "Generic (PLEG): container finished" podID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerID="204774cf3c338a14fa8418d4b7dabbbc8f8086735d48069849b565df6f421a3b" exitCode=0 Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.817944 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerDied","Data":"6075bc51bbd052d7b2e2a46006473c9144cd0843961a8d6d84454fd68f8e8af9"} Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.818008 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerDied","Data":"642019c9cbd02857f05fb0712d9b926595677146ad53618d42edceb8b025502a"} Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.818024 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerDied","Data":"204774cf3c338a14fa8418d4b7dabbbc8f8086735d48069849b565df6f421a3b"} Feb 23 10:24:04 crc kubenswrapper[4904]: I0223 10:24:04.833296 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.63329127 podStartE2EDuration="1m13.833277236s" podCreationTimestamp="2026-02-23 10:22:51 +0000 UTC" firstStartedPulling="2026-02-23 10:22:53.765090009 +0000 UTC m=+1007.185463522" lastFinishedPulling="2026-02-23 10:23:27.965075975 +0000 UTC m=+1041.385449488" observedRunningTime="2026-02-23 10:24:04.828585363 +0000 UTC m=+1078.248958876" watchObservedRunningTime="2026-02-23 10:24:04.833277236 +0000 UTC m=+1078.253650749" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.133596 4904 util.go:48] "No ready sandbox for pod can be found. 
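[editor's note] The three prometheus-metric-storage-0 ContainerDied events above close out the deletion that started with "Killing container with a grace period": each container is asked to stop and given up to gracePeriod=600 seconds before a forced kill. The generic TERM-then-KILL contract, as a hedged sketch rather than CRI-O's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to the grace period, and only then
// escalates to SIGKILL.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		fmt.Println("exited within the grace period")
	case <-time.After(grace):
		cmd.Process.Kill() // SIGKILL once the deadline passes
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "30")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second) // the log's value is 600s
}
```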
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.238798 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-1\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.238868 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-0\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.238907 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjk8q\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-kube-api-access-tjk8q\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.238949 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-2\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239025 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239254 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239300 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config-out\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239372 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-thanos-prometheus-http-client-file\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239421 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-tls-assets\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239468 4904 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-web-config\") pod \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\" (UID: \"dd62fb2b-c564-4004-a886-f2d4bd1d3eda\") " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239672 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239683 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239927 4904 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.239946 4904 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.240513 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.244989 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.245210 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config" (OuterVolumeSpecName: "config") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.245488 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config-out" (OuterVolumeSpecName: "config-out") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.246999 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-kube-api-access-tjk8q" (OuterVolumeSpecName: "kube-api-access-tjk8q") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "kube-api-access-tjk8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.247229 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.263670 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "pvc-0698fc29-13b4-43aa-8032-06d87cf4b025". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.268208 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-web-config" (OuterVolumeSpecName: "web-config") pod "dd62fb2b-c564-4004-a886-f2d4bd1d3eda" (UID: "dd62fb2b-c564-4004-a886-f2d4bd1d3eda"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.341868 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.341947 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") on node \"crc\" " Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.341964 4904 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-config-out\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.341979 4904 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.341989 4904 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.342002 4904 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-web-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.342011 4904 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.342021 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjk8q\" (UniqueName: \"kubernetes.io/projected/dd62fb2b-c564-4004-a886-f2d4bd1d3eda-kube-api-access-tjk8q\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.365203 4904 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.365371 4904 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0698fc29-13b4-43aa-8032-06d87cf4b025" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025") on node "crc" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.443983 4904 reconciler_common.go:293] "Volume detached for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.826875 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q7rxf" podUID="717c8a73-d7f4-48d3-920d-f573f4f9dc9b" containerName="ovn-controller" probeResult="failure" output=< Feb 23 10:24:05 crc kubenswrapper[4904]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 23 10:24:05 crc kubenswrapper[4904]: > Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.831643 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd62fb2b-c564-4004-a886-f2d4bd1d3eda","Type":"ContainerDied","Data":"ab1381383d496ec9fd8c64802212f400388d5cd3a235fdd47cdceba228c52577"} Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.831683 4904 scope.go:117] "RemoveContainer" containerID="6075bc51bbd052d7b2e2a46006473c9144cd0843961a8d6d84454fd68f8e8af9" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.832023 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.859756 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.863121 4904 scope.go:117] "RemoveContainer" containerID="642019c9cbd02857f05fb0712d9b926595677146ad53618d42edceb8b025502a" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.871468 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.885307 4904 scope.go:117] "RemoveContainer" containerID="204774cf3c338a14fa8418d4b7dabbbc8f8086735d48069849b565df6f421a3b" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896227 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896589 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="config-reloader" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896608 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="config-reloader" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896619 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="init-config-reloader" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896628 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="init-config-reloader" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896642 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40413731-ae44-4eca-a751-3ba3a4e42558" 
containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896648 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="40413731-ae44-4eca-a751-3ba3a4e42558" containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896664 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9e4686-034e-4478-a278-f2ceb0516da7" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896670 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9e4686-034e-4478-a278-f2ceb0516da7" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896679 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53aa525-6ef1-4984-803a-fc0466c201b3" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896685 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53aa525-6ef1-4984-803a-fc0466c201b3" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896695 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83019808-1db2-46ed-87bd-ef75476802e7" containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896701 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="83019808-1db2-46ed-87bd-ef75476802e7" containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896747 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b69c9fa-305e-484f-98e7-c8928bec7a13" containerName="swift-ring-rebalance" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896754 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b69c9fa-305e-484f-98e7-c8928bec7a13" containerName="swift-ring-rebalance" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896765 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f39f07b-ae9c-44dd-a512-2642f0a78b07" containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896772 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f39f07b-ae9c-44dd-a512-2642f0a78b07" containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896783 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="thanos-sidecar" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896789 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="thanos-sidecar" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896797 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cb32e0-df2b-453f-9734-d3e37f6aa5eb" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896804 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cb32e0-df2b-453f-9734-d3e37f6aa5eb" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: E0223 10:24:05.896817 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="prometheus" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.896827 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="prometheus" Feb 23 10:24:05 crc 
kubenswrapper[4904]: I0223 10:24:05.897003 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="prometheus" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897032 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f39f07b-ae9c-44dd-a512-2642f0a78b07" containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897049 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="thanos-sidecar" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897058 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="83019808-1db2-46ed-87bd-ef75476802e7" containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897069 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b69c9fa-305e-484f-98e7-c8928bec7a13" containerName="swift-ring-rebalance" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897081 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cb32e0-df2b-453f-9734-d3e37f6aa5eb" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897093 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53aa525-6ef1-4984-803a-fc0466c201b3" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897106 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" containerName="config-reloader" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897119 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9e4686-034e-4478-a278-f2ceb0516da7" containerName="mariadb-account-create-update" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.897127 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="40413731-ae44-4eca-a751-3ba3a4e42558" containerName="mariadb-database-create" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.898592 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.905405 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.907466 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-c4rwz" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.909389 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.909549 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.909689 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.909560 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.909558 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.918188 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.920323 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.924263 4904 scope.go:117] "RemoveContainer" containerID="61c1816af19d2a006e53c2585a487f27ce25eb18f6db49d8b009e2147acec83d" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.939250 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.959827 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.959878 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.959920 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.959952 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.959976 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.960037 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.960070 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkfwt\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-kube-api-access-tkfwt\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.960092 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.960109 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1c937d26-4043-491e-8d60-2ac2216169b6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.960124 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.960142 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.960169 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.960236 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-config\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:05 crc kubenswrapper[4904]: I0223 10:24:05.987201 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-82gvj" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.027328 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-82gvj" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061372 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061454 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkfwt\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-kube-api-access-tkfwt\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061495 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061520 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1c937d26-4043-491e-8d60-2ac2216169b6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061539 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " 
pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061600 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061662 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-config\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061694 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061733 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061759 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061797 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.061824 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.063103 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.063253 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.063773 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.070004 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.072598 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1c937d26-4043-491e-8d60-2ac2216169b6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.074421 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.075224 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.076895 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.077192 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.077544 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-config\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " 
pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.077908 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.093273 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.093322 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/40819dc70d8445a50995e9a88cc270de788496e012ad6d3d513a831f13ec32aa/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.094463 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkfwt\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-kube-api-access-tkfwt\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.140766 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.216048 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.292289 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q7rxf-config-k8qft"] Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.301921 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.305416 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.318440 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mkr6g"] Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.323364 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.329288 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jjmnj" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.329512 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.330820 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q7rxf-config-k8qft"] Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.344117 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mkr6g"] Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371433 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cxrw\" (UniqueName: \"kubernetes.io/projected/4613f33e-e2e8-4db0-91d3-3c389a5d8675-kube-api-access-5cxrw\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371482 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-log-ovn\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371506 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run-ovn\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371535 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371566 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrp95\" (UniqueName: \"kubernetes.io/projected/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-kube-api-access-jrp95\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371607 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-db-sync-config-data\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371624 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-additional-scripts\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: 
\"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371750 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-scripts\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371771 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-combined-ca-bundle\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.371794 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-config-data\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473094 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-scripts\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473383 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-combined-ca-bundle\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473409 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-config-data\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473457 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cxrw\" (UniqueName: \"kubernetes.io/projected/4613f33e-e2e8-4db0-91d3-3c389a5d8675-kube-api-access-5cxrw\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473484 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-log-ovn\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473501 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run-ovn\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " 
pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473530 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473549 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrp95\" (UniqueName: \"kubernetes.io/projected/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-kube-api-access-jrp95\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473585 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-db-sync-config-data\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.473601 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-additional-scripts\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.474358 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-additional-scripts\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.475189 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-scripts\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.475273 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run-ovn\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.475289 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-log-ovn\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.475387 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc 
kubenswrapper[4904]: I0223 10:24:06.479398 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-combined-ca-bundle\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.480315 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-db-sync-config-data\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.483108 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-config-data\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.498430 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cxrw\" (UniqueName: \"kubernetes.io/projected/4613f33e-e2e8-4db0-91d3-3c389a5d8675-kube-api-access-5cxrw\") pod \"ovn-controller-q7rxf-config-k8qft\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.501687 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrp95\" (UniqueName: \"kubernetes.io/projected/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-kube-api-access-jrp95\") pod \"glance-db-sync-mkr6g\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.625129 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.640845 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:06 crc kubenswrapper[4904]: I0223 10:24:06.832737 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:24:06 crc kubenswrapper[4904]: W0223 10:24:06.837772 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c937d26_4043_491e_8d60_2ac2216169b6.slice/crio-a53a5175a1612d83cdfbc52c5b78adc2e044346d6e028509ac720bbd69a216b1 WatchSource:0}: Error finding container a53a5175a1612d83cdfbc52c5b78adc2e044346d6e028509ac720bbd69a216b1: Status 404 returned error can't find the container with id a53a5175a1612d83cdfbc52c5b78adc2e044346d6e028509ac720bbd69a216b1 Feb 23 10:24:07 crc kubenswrapper[4904]: W0223 10:24:07.205935 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4613f33e_e2e8_4db0_91d3_3c389a5d8675.slice/crio-511017fa0c71012f98538d7374a61ea43ed0d19bfc69187d45360ad850330533 WatchSource:0}: Error finding container 511017fa0c71012f98538d7374a61ea43ed0d19bfc69187d45360ad850330533: Status 404 returned error can't find the container with id 511017fa0c71012f98538d7374a61ea43ed0d19bfc69187d45360ad850330533 Feb 23 10:24:07 crc kubenswrapper[4904]: I0223 10:24:07.211966 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q7rxf-config-k8qft"] Feb 23 10:24:07 crc kubenswrapper[4904]: I0223 10:24:07.272597 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd62fb2b-c564-4004-a886-f2d4bd1d3eda" path="/var/lib/kubelet/pods/dd62fb2b-c564-4004-a886-f2d4bd1d3eda/volumes" Feb 23 10:24:07 crc kubenswrapper[4904]: I0223 10:24:07.344846 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mkr6g"] Feb 23 10:24:07 crc kubenswrapper[4904]: I0223 10:24:07.865878 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q7rxf-config-k8qft" event={"ID":"4613f33e-e2e8-4db0-91d3-3c389a5d8675","Type":"ContainerStarted","Data":"b98c5f0ef3328ef89f141a647dfecd67b5b74ee464f3787b78566d9469ae1315"} Feb 23 10:24:07 crc kubenswrapper[4904]: I0223 10:24:07.866435 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q7rxf-config-k8qft" event={"ID":"4613f33e-e2e8-4db0-91d3-3c389a5d8675","Type":"ContainerStarted","Data":"511017fa0c71012f98538d7374a61ea43ed0d19bfc69187d45360ad850330533"} Feb 23 10:24:07 crc kubenswrapper[4904]: I0223 10:24:07.869681 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerStarted","Data":"a53a5175a1612d83cdfbc52c5b78adc2e044346d6e028509ac720bbd69a216b1"} Feb 23 10:24:07 crc kubenswrapper[4904]: I0223 10:24:07.871652 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mkr6g" event={"ID":"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e","Type":"ContainerStarted","Data":"f9bc793e59e1ff71bdc5419ad06fe48e7ab5fbb530b10c5782fcfe17bf952247"} Feb 23 10:24:07 crc kubenswrapper[4904]: I0223 10:24:07.890084 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q7rxf-config-k8qft" podStartSLOduration=1.890066368 podStartE2EDuration="1.890066368s" podCreationTimestamp="2026-02-23 10:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 10:24:07.88941747 +0000 UTC m=+1081.309790983" watchObservedRunningTime="2026-02-23 10:24:07.890066368 +0000 UTC m=+1081.310439881" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.093015 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lqtz2"] Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.094655 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.098458 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.102609 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lqtz2"] Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.228560 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c9ln\" (UniqueName: \"kubernetes.io/projected/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-kube-api-access-8c9ln\") pod \"root-account-create-update-lqtz2\" (UID: \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\") " pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.228906 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-operator-scripts\") pod \"root-account-create-update-lqtz2\" (UID: \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\") " pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.332148 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c9ln\" (UniqueName: \"kubernetes.io/projected/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-kube-api-access-8c9ln\") pod \"root-account-create-update-lqtz2\" (UID: \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\") " pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.332327 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-operator-scripts\") pod \"root-account-create-update-lqtz2\" (UID: \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\") " pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.333921 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-operator-scripts\") pod \"root-account-create-update-lqtz2\" (UID: \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\") " pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.359618 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c9ln\" (UniqueName: \"kubernetes.io/projected/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-kube-api-access-8c9ln\") pod \"root-account-create-update-lqtz2\" (UID: \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\") " pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.494688 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.899846 4904 generic.go:334] "Generic (PLEG): container finished" podID="4613f33e-e2e8-4db0-91d3-3c389a5d8675" containerID="b98c5f0ef3328ef89f141a647dfecd67b5b74ee464f3787b78566d9469ae1315" exitCode=0 Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.899959 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q7rxf-config-k8qft" event={"ID":"4613f33e-e2e8-4db0-91d3-3c389a5d8675","Type":"ContainerDied","Data":"b98c5f0ef3328ef89f141a647dfecd67b5b74ee464f3787b78566d9469ae1315"} Feb 23 10:24:08 crc kubenswrapper[4904]: I0223 10:24:08.999178 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lqtz2"] Feb 23 10:24:09 crc kubenswrapper[4904]: W0223 10:24:09.022946 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d21f024_3e6c_41e0_a11b_0032bd1ec7df.slice/crio-d3b4407f9e7949001403348d72197616b57209faa6baee426da841ea7562f91a WatchSource:0}: Error finding container d3b4407f9e7949001403348d72197616b57209faa6baee426da841ea7562f91a: Status 404 returned error can't find the container with id d3b4407f9e7949001403348d72197616b57209faa6baee426da841ea7562f91a Feb 23 10:24:09 crc kubenswrapper[4904]: I0223 10:24:09.910728 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerStarted","Data":"5a489beb4de90f6dddc4bd2689ad37d8857caf8b7ac00e673b7777d4793ec722"} Feb 23 10:24:09 crc kubenswrapper[4904]: I0223 10:24:09.913635 4904 generic.go:334] "Generic (PLEG): container finished" podID="4d21f024-3e6c-41e0-a11b-0032bd1ec7df" containerID="99221c4a0d63aefd0b780abbc1ade1fca7299f3b0c83fb00bef8e9e5c58028ec" exitCode=0 Feb 23 10:24:09 crc kubenswrapper[4904]: I0223 10:24:09.913757 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqtz2" event={"ID":"4d21f024-3e6c-41e0-a11b-0032bd1ec7df","Type":"ContainerDied","Data":"99221c4a0d63aefd0b780abbc1ade1fca7299f3b0c83fb00bef8e9e5c58028ec"} Feb 23 10:24:09 crc kubenswrapper[4904]: I0223 10:24:09.913792 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqtz2" event={"ID":"4d21f024-3e6c-41e0-a11b-0032bd1ec7df","Type":"ContainerStarted","Data":"d3b4407f9e7949001403348d72197616b57209faa6baee426da841ea7562f91a"} Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.289130 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.374223 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run-ovn\") pod \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.374272 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-log-ovn\") pod \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.374382 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4613f33e-e2e8-4db0-91d3-3c389a5d8675" (UID: "4613f33e-e2e8-4db0-91d3-3c389a5d8675"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.374400 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4613f33e-e2e8-4db0-91d3-3c389a5d8675" (UID: "4613f33e-e2e8-4db0-91d3-3c389a5d8675"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.374423 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-additional-scripts\") pod \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.374616 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-scripts\") pod \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.374831 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run\") pod \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.374865 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cxrw\" (UniqueName: \"kubernetes.io/projected/4613f33e-e2e8-4db0-91d3-3c389a5d8675-kube-api-access-5cxrw\") pod \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\" (UID: \"4613f33e-e2e8-4db0-91d3-3c389a5d8675\") " Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.375525 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4613f33e-e2e8-4db0-91d3-3c389a5d8675" (UID: "4613f33e-e2e8-4db0-91d3-3c389a5d8675"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.375784 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-scripts" (OuterVolumeSpecName: "scripts") pod "4613f33e-e2e8-4db0-91d3-3c389a5d8675" (UID: "4613f33e-e2e8-4db0-91d3-3c389a5d8675"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.375825 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run" (OuterVolumeSpecName: "var-run") pod "4613f33e-e2e8-4db0-91d3-3c389a5d8675" (UID: "4613f33e-e2e8-4db0-91d3-3c389a5d8675"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.376026 4904 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.376048 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4613f33e-e2e8-4db0-91d3-3c389a5d8675-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.376059 4904 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.376070 4904 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.376081 4904 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4613f33e-e2e8-4db0-91d3-3c389a5d8675-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.385116 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4613f33e-e2e8-4db0-91d3-3c389a5d8675-kube-api-access-5cxrw" (OuterVolumeSpecName: "kube-api-access-5cxrw") pod "4613f33e-e2e8-4db0-91d3-3c389a5d8675" (UID: "4613f33e-e2e8-4db0-91d3-3c389a5d8675"). InnerVolumeSpecName "kube-api-access-5cxrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.441969 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q7rxf-config-k8qft"] Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.450576 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q7rxf-config-k8qft"] Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.477961 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cxrw\" (UniqueName: \"kubernetes.io/projected/4613f33e-e2e8-4db0-91d3-3c389a5d8675-kube-api-access-5cxrw\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.929608 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-q7rxf" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.934500 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511017fa0c71012f98538d7374a61ea43ed0d19bfc69187d45360ad850330533" Feb 23 10:24:10 crc kubenswrapper[4904]: I0223 10:24:10.934633 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q7rxf-config-k8qft" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.269177 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4613f33e-e2e8-4db0-91d3-3c389a5d8675" path="/var/lib/kubelet/pods/4613f33e-e2e8-4db0-91d3-3c389a5d8675/volumes" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.292668 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.395890 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-operator-scripts\") pod \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\" (UID: \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\") " Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.396022 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c9ln\" (UniqueName: \"kubernetes.io/projected/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-kube-api-access-8c9ln\") pod \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\" (UID: \"4d21f024-3e6c-41e0-a11b-0032bd1ec7df\") " Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.396871 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d21f024-3e6c-41e0-a11b-0032bd1ec7df" (UID: "4d21f024-3e6c-41e0-a11b-0032bd1ec7df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.405246 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-kube-api-access-8c9ln" (OuterVolumeSpecName: "kube-api-access-8c9ln") pod "4d21f024-3e6c-41e0-a11b-0032bd1ec7df" (UID: "4d21f024-3e6c-41e0-a11b-0032bd1ec7df"). InnerVolumeSpecName "kube-api-access-8c9ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.498946 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c9ln\" (UniqueName: \"kubernetes.io/projected/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-kube-api-access-8c9ln\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.498990 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d21f024-3e6c-41e0-a11b-0032bd1ec7df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.703509 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.710505 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/71f24a32-6e0a-4a39-9570-92c373672a9b-etc-swift\") pod \"swift-storage-0\" (UID: \"71f24a32-6e0a-4a39-9570-92c373672a9b\") " pod="openstack/swift-storage-0" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.725316 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.956559 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqtz2" event={"ID":"4d21f024-3e6c-41e0-a11b-0032bd1ec7df","Type":"ContainerDied","Data":"d3b4407f9e7949001403348d72197616b57209faa6baee426da841ea7562f91a"} Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.956981 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3b4407f9e7949001403348d72197616b57209faa6baee426da841ea7562f91a" Feb 23 10:24:11 crc kubenswrapper[4904]: I0223 10:24:11.957062 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lqtz2" Feb 23 10:24:12 crc kubenswrapper[4904]: I0223 10:24:12.206013 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 10:24:12 crc kubenswrapper[4904]: W0223 10:24:12.219390 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71f24a32_6e0a_4a39_9570_92c373672a9b.slice/crio-caa7947a40498b415d1e7601abf1b30b21976ccef2fa35f5b9dd9e5c24cc9f55 WatchSource:0}: Error finding container caa7947a40498b415d1e7601abf1b30b21976ccef2fa35f5b9dd9e5c24cc9f55: Status 404 returned error can't find the container with id caa7947a40498b415d1e7601abf1b30b21976ccef2fa35f5b9dd9e5c24cc9f55 Feb 23 10:24:12 crc kubenswrapper[4904]: I0223 10:24:12.972318 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"caa7947a40498b415d1e7601abf1b30b21976ccef2fa35f5b9dd9e5c24cc9f55"} Feb 23 10:24:12 crc kubenswrapper[4904]: I0223 10:24:12.975521 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 10:24:13 crc kubenswrapper[4904]: I0223 10:24:13.362098 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:24:14 crc kubenswrapper[4904]: I0223 10:24:14.995677 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"88ba76809e31937a857fc3f52e710a5584fad7405aee492cbf89aef27ca4ba81"} Feb 23 10:24:14 crc kubenswrapper[4904]: I0223 10:24:14.996751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"a5763be61ba3393a12647c0e5bfb6b3ce15fd42e6bb28c75c3094968abdf7228"} Feb 23 10:24:14 crc kubenswrapper[4904]: I0223 10:24:14.996765 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"5cd81b5e00b5be4e965a73553cd457ae6e8d0eaf10e3be93b4d0abd2978386dc"} Feb 23 10:24:17 crc kubenswrapper[4904]: I0223 10:24:17.020044 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c937d26-4043-491e-8d60-2ac2216169b6" containerID="5a489beb4de90f6dddc4bd2689ad37d8857caf8b7ac00e673b7777d4793ec722" exitCode=0 Feb 23 10:24:17 crc kubenswrapper[4904]: I0223 10:24:17.020164 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerDied","Data":"5a489beb4de90f6dddc4bd2689ad37d8857caf8b7ac00e673b7777d4793ec722"} Feb 23 10:24:22 crc kubenswrapper[4904]: I0223 10:24:22.980980 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.108512 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"960d011a56c222e268faba4274fd4cb5341216c7b75f6023d7116a7b59f2aa60"} Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.122224 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerStarted","Data":"96647140c6dd17d6322f7c14c7bcfcb6840b9849a7856424100c7345c067f6ca"} Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.328544 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-rf6dz"] Feb 23 10:24:23 crc kubenswrapper[4904]: E0223 10:24:23.334808 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d21f024-3e6c-41e0-a11b-0032bd1ec7df" containerName="mariadb-account-create-update" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.334826 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d21f024-3e6c-41e0-a11b-0032bd1ec7df" containerName="mariadb-account-create-update" Feb 23 10:24:23 crc kubenswrapper[4904]: E0223 10:24:23.334838 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4613f33e-e2e8-4db0-91d3-3c389a5d8675" containerName="ovn-config" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.334846 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4613f33e-e2e8-4db0-91d3-3c389a5d8675" containerName="ovn-config" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.335020 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d21f024-3e6c-41e0-a11b-0032bd1ec7df" containerName="mariadb-account-create-update" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.335047 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4613f33e-e2e8-4db0-91d3-3c389a5d8675" containerName="ovn-config" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.335629 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.348160 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-djcln" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.348303 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.350618 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-rf6dz"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.442122 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rbxsv"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.443439 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.472122 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-combined-ca-bundle\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.472211 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5wn8\" (UniqueName: \"kubernetes.io/projected/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-kube-api-access-k5wn8\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.472249 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-db-sync-config-data\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.472276 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-config-data\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.485844 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rbxsv"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.567490 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db0b-account-create-update-xj8cx"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.572007 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.573547 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-combined-ca-bundle\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.573611 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5wn8\" (UniqueName: \"kubernetes.io/projected/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-kube-api-access-k5wn8\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.573645 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-db-sync-config-data\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.573678 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-config-data\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.573706 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882pg\" (UniqueName: \"kubernetes.io/projected/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-kube-api-access-882pg\") pod \"cinder-db-create-rbxsv\" (UID: \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\") " pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.573752 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-operator-scripts\") pod \"cinder-db-create-rbxsv\" (UID: \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\") " pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.574834 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.584620 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-config-data\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.585392 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-combined-ca-bundle\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.601892 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db0b-account-create-update-xj8cx"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.602142 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-db-sync-config-data\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.605478 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5wn8\" (UniqueName: \"kubernetes.io/projected/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-kube-api-access-k5wn8\") pod \"watcher-db-sync-rf6dz\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.669271 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.675228 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882pg\" (UniqueName: \"kubernetes.io/projected/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-kube-api-access-882pg\") pod \"cinder-db-create-rbxsv\" (UID: \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\") " pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.675294 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-operator-scripts\") pod \"cinder-db-create-rbxsv\" (UID: \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\") " pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.675324 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk94x\" (UniqueName: \"kubernetes.io/projected/85242600-b4c3-4941-93fa-1cf4cb16e6cc-kube-api-access-lk94x\") pod \"cinder-db0b-account-create-update-xj8cx\" (UID: \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\") " pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.675434 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85242600-b4c3-4941-93fa-1cf4cb16e6cc-operator-scripts\") pod \"cinder-db0b-account-create-update-xj8cx\" (UID: \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\") " pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.676496 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-operator-scripts\") pod \"cinder-db-create-rbxsv\" (UID: \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\") " pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.687795 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-65pr4"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.689003 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.699282 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882pg\" (UniqueName: \"kubernetes.io/projected/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-kube-api-access-882pg\") pod \"cinder-db-create-rbxsv\" (UID: \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\") " pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.731104 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-65pr4"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.781957 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85242600-b4c3-4941-93fa-1cf4cb16e6cc-operator-scripts\") pod \"cinder-db0b-account-create-update-xj8cx\" (UID: \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\") " pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.782026 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d23998-f8e9-45bc-9af7-f37221dc0390-operator-scripts\") pod \"barbican-db-create-65pr4\" (UID: \"51d23998-f8e9-45bc-9af7-f37221dc0390\") " pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.782095 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk94x\" (UniqueName: \"kubernetes.io/projected/85242600-b4c3-4941-93fa-1cf4cb16e6cc-kube-api-access-lk94x\") pod \"cinder-db0b-account-create-update-xj8cx\" (UID: \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\") " pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.782161 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvk7\" (UniqueName: \"kubernetes.io/projected/51d23998-f8e9-45bc-9af7-f37221dc0390-kube-api-access-wnvk7\") pod \"barbican-db-create-65pr4\" (UID: \"51d23998-f8e9-45bc-9af7-f37221dc0390\") " pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.783081 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85242600-b4c3-4941-93fa-1cf4cb16e6cc-operator-scripts\") pod \"cinder-db0b-account-create-update-xj8cx\" (UID: \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\") " pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.789019 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.793211 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-wczj2"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.794616 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.823799 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wczj2"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.829788 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3b16-account-create-update-z7z8t"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.830930 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.835113 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.835870 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk94x\" (UniqueName: \"kubernetes.io/projected/85242600-b4c3-4941-93fa-1cf4cb16e6cc-kube-api-access-lk94x\") pod \"cinder-db0b-account-create-update-xj8cx\" (UID: \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\") " pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.864898 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3b16-account-create-update-z7z8t"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.887322 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vfk\" (UniqueName: \"kubernetes.io/projected/48c89630-853b-4415-88b0-2282c4a9e9f9-kube-api-access-92vfk\") pod \"neutron-db-create-wczj2\" (UID: \"48c89630-853b-4415-88b0-2282c4a9e9f9\") " pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.887462 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d23998-f8e9-45bc-9af7-f37221dc0390-operator-scripts\") pod \"barbican-db-create-65pr4\" (UID: \"51d23998-f8e9-45bc-9af7-f37221dc0390\") " pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.887524 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqfw2\" (UniqueName: \"kubernetes.io/projected/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-kube-api-access-gqfw2\") pod \"barbican-3b16-account-create-update-z7z8t\" (UID: \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\") " pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.887630 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-operator-scripts\") pod \"barbican-3b16-account-create-update-z7z8t\" (UID: \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\") " pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.887680 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c89630-853b-4415-88b0-2282c4a9e9f9-operator-scripts\") pod \"neutron-db-create-wczj2\" (UID: \"48c89630-853b-4415-88b0-2282c4a9e9f9\") " pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.887734 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wnvk7\" (UniqueName: \"kubernetes.io/projected/51d23998-f8e9-45bc-9af7-f37221dc0390-kube-api-access-wnvk7\") pod \"barbican-db-create-65pr4\" (UID: \"51d23998-f8e9-45bc-9af7-f37221dc0390\") " pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.896483 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d23998-f8e9-45bc-9af7-f37221dc0390-operator-scripts\") pod \"barbican-db-create-65pr4\" (UID: \"51d23998-f8e9-45bc-9af7-f37221dc0390\") " pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.939053 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kt929"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.940459 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvk7\" (UniqueName: \"kubernetes.io/projected/51d23998-f8e9-45bc-9af7-f37221dc0390-kube-api-access-wnvk7\") pod \"barbican-db-create-65pr4\" (UID: \"51d23998-f8e9-45bc-9af7-f37221dc0390\") " pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.944668 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.948690 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.948876 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.948995 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c598d" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.949114 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.955904 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kt929"] Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.965488 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.991229 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vfk\" (UniqueName: \"kubernetes.io/projected/48c89630-853b-4415-88b0-2282c4a9e9f9-kube-api-access-92vfk\") pod \"neutron-db-create-wczj2\" (UID: \"48c89630-853b-4415-88b0-2282c4a9e9f9\") " pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.991292 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-combined-ca-bundle\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.991324 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqfw2\" (UniqueName: \"kubernetes.io/projected/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-kube-api-access-gqfw2\") pod \"barbican-3b16-account-create-update-z7z8t\" (UID: \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\") " pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.991382 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-operator-scripts\") pod \"barbican-3b16-account-create-update-z7z8t\" (UID: \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\") " pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.991415 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c89630-853b-4415-88b0-2282c4a9e9f9-operator-scripts\") pod \"neutron-db-create-wczj2\" (UID: \"48c89630-853b-4415-88b0-2282c4a9e9f9\") " pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.991432 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-config-data\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.991460 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5jp\" (UniqueName: \"kubernetes.io/projected/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-kube-api-access-pt5jp\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.992529 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-operator-scripts\") pod \"barbican-3b16-account-create-update-z7z8t\" (UID: \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\") " pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:23 crc kubenswrapper[4904]: I0223 10:24:23.993057 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/48c89630-853b-4415-88b0-2282c4a9e9f9-operator-scripts\") pod \"neutron-db-create-wczj2\" (UID: \"48c89630-853b-4415-88b0-2282c4a9e9f9\") " pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.027299 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqfw2\" (UniqueName: \"kubernetes.io/projected/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-kube-api-access-gqfw2\") pod \"barbican-3b16-account-create-update-z7z8t\" (UID: \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\") " pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.037638 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92vfk\" (UniqueName: \"kubernetes.io/projected/48c89630-853b-4415-88b0-2282c4a9e9f9-kube-api-access-92vfk\") pod \"neutron-db-create-wczj2\" (UID: \"48c89630-853b-4415-88b0-2282c4a9e9f9\") " pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.078335 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7058-account-create-update-9h8tl"] Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.080238 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.082442 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.097176 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.101837 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7058-account-create-update-9h8tl"] Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.109357 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-config-data\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.109436 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5jp\" (UniqueName: \"kubernetes.io/projected/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-kube-api-access-pt5jp\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.109514 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-combined-ca-bundle\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.113412 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.119568 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-combined-ca-bundle\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.120199 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-config-data\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.155051 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5jp\" (UniqueName: \"kubernetes.io/projected/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-kube-api-access-pt5jp\") pod \"keystone-db-sync-kt929\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.156105 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.179200 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mkr6g" event={"ID":"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e","Type":"ContainerStarted","Data":"a18a085f816229cf166da1f20a67ea25169961a530168ed8c2ec858485860440"} Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.211637 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mkr6g" podStartSLOduration=2.9759715140000003 podStartE2EDuration="18.211618876s" podCreationTimestamp="2026-02-23 10:24:06 +0000 UTC" firstStartedPulling="2026-02-23 10:24:07.349591682 +0000 UTC m=+1080.769965195" lastFinishedPulling="2026-02-23 10:24:22.585239034 +0000 UTC m=+1096.005612557" observedRunningTime="2026-02-23 10:24:24.211208005 +0000 UTC m=+1097.631581518" watchObservedRunningTime="2026-02-23 10:24:24.211618876 +0000 UTC m=+1097.631992389" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.212342 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aefea65-709c-4842-88b3-75ecd926e2de-operator-scripts\") pod \"neutron-7058-account-create-update-9h8tl\" (UID: \"4aefea65-709c-4842-88b3-75ecd926e2de\") " pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.212436 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mkww\" (UniqueName: \"kubernetes.io/projected/4aefea65-709c-4842-88b3-75ecd926e2de-kube-api-access-2mkww\") pod \"neutron-7058-account-create-update-9h8tl\" (UID: \"4aefea65-709c-4842-88b3-75ecd926e2de\") " pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.265937 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kt929" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.313738 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aefea65-709c-4842-88b3-75ecd926e2de-operator-scripts\") pod \"neutron-7058-account-create-update-9h8tl\" (UID: \"4aefea65-709c-4842-88b3-75ecd926e2de\") " pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.313835 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mkww\" (UniqueName: \"kubernetes.io/projected/4aefea65-709c-4842-88b3-75ecd926e2de-kube-api-access-2mkww\") pod \"neutron-7058-account-create-update-9h8tl\" (UID: \"4aefea65-709c-4842-88b3-75ecd926e2de\") " pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:24 crc kubenswrapper[4904]: I0223 10:24:24.317127 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aefea65-709c-4842-88b3-75ecd926e2de-operator-scripts\") pod \"neutron-7058-account-create-update-9h8tl\" (UID: \"4aefea65-709c-4842-88b3-75ecd926e2de\") " pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:24.520311 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mkww\" (UniqueName: \"kubernetes.io/projected/4aefea65-709c-4842-88b3-75ecd926e2de-kube-api-access-2mkww\") pod \"neutron-7058-account-create-update-9h8tl\" (UID: \"4aefea65-709c-4842-88b3-75ecd926e2de\") " pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:24.589329 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-rf6dz"] Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:24.624031 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rbxsv"] Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:24.712260 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:24.784712 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db0b-account-create-update-xj8cx"] Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:25.209567 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db0b-account-create-update-xj8cx" event={"ID":"85242600-b4c3-4941-93fa-1cf4cb16e6cc","Type":"ContainerStarted","Data":"1af7b2f7f2a035fb2fda2e0df2721a67756c9ba8f42c15cd0c3b9043cb4f85ba"} Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:25.214329 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-rf6dz" event={"ID":"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7","Type":"ContainerStarted","Data":"8fd841841b8ff06239eff3b2632c7988c180ec617776a66e9d372fbbedba1258"} Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:25.224029 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rbxsv" event={"ID":"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6","Type":"ContainerStarted","Data":"21909e9e74f0269701097e90df20554f4e40f1eaf7cc267362b9041da33534ff"} Feb 23 10:24:25 crc kubenswrapper[4904]: I0223 10:24:25.992673 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-65pr4"] Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.178274 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kt929"] Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.186203 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3b16-account-create-update-z7z8t"] Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.196521 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7058-account-create-update-9h8tl"] Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.223810 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wczj2"] Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.244544 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerStarted","Data":"c198dd66ddc7e5ee4830171dad765fab5ea1988b9d0ee4ebd18f2781141f791e"} Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.248326 4904 generic.go:334] "Generic (PLEG): container finished" podID="85242600-b4c3-4941-93fa-1cf4cb16e6cc" containerID="7ebb474f0b91f4489d7845f31ad72411e599a1b648d9968324346be5acdd48e5" exitCode=0 Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.248415 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db0b-account-create-update-xj8cx" event={"ID":"85242600-b4c3-4941-93fa-1cf4cb16e6cc","Type":"ContainerDied","Data":"7ebb474f0b91f4489d7845f31ad72411e599a1b648d9968324346be5acdd48e5"} Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.253273 4904 generic.go:334] "Generic (PLEG): container finished" podID="62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6" containerID="781e12200c006e5fc6fb53626ef5ac2414111464ba67b4efa6b1ccd7c803d831" exitCode=0 Feb 23 10:24:26 crc kubenswrapper[4904]: I0223 10:24:26.253315 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rbxsv" event={"ID":"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6","Type":"ContainerDied","Data":"781e12200c006e5fc6fb53626ef5ac2414111464ba67b4efa6b1ccd7c803d831"} Feb 23 10:24:27 crc kubenswrapper[4904]: I0223 10:24:27.277674 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-65pr4" event={"ID":"51d23998-f8e9-45bc-9af7-f37221dc0390","Type":"ContainerStarted","Data":"22ce1c374fe443d47b97bd49505383e2238cad72de5d3df05c096785441eaed1"} Feb 23 10:24:27 crc kubenswrapper[4904]: I0223 10:24:27.280788 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b16-account-create-update-z7z8t" event={"ID":"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c","Type":"ContainerStarted","Data":"bc293b9a265fbe68cf9b99e5fad8552f798b0a6c4096419dff63ae9542937d39"} Feb 23 10:24:27 crc kubenswrapper[4904]: I0223 10:24:27.282467 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kt929" event={"ID":"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d","Type":"ContainerStarted","Data":"a8363311105e6e2d95a04c1d4feaaf58ff9e63d9e421e549b393386b5b2a839e"} Feb 23 10:24:27 crc kubenswrapper[4904]: I0223 10:24:27.294465 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerStarted","Data":"f863103165774fefdba1d9b373e02e8b1778ce9fc42f128424fe5d91c260e4d3"} Feb 23 10:24:27 crc kubenswrapper[4904]: I0223 10:24:27.304528 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7058-account-create-update-9h8tl" event={"ID":"4aefea65-709c-4842-88b3-75ecd926e2de","Type":"ContainerStarted","Data":"f4dc6174edc6ff96f419886efaddc652b0388694ba051c1a0301b2cb391bc72f"} Feb 23 10:24:27 crc kubenswrapper[4904]: I0223 10:24:27.308144 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wczj2" event={"ID":"48c89630-853b-4415-88b0-2282c4a9e9f9","Type":"ContainerStarted","Data":"5b325dbc03a36cfeeded5bd15c6cd97435aefa39d9693925d2369b41c3efea01"} Feb 23 10:24:27 crc kubenswrapper[4904]: I0223 10:24:27.465651 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.465605035 podStartE2EDuration="22.465605035s" podCreationTimestamp="2026-02-23 10:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:24:27.458289617 +0000 UTC m=+1100.878663160" watchObservedRunningTime="2026-02-23 10:24:27.465605035 +0000 UTC m=+1100.885978558" Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.318392 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.323920 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db0b-account-create-update-xj8cx" Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.324251 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db0b-account-create-update-xj8cx" event={"ID":"85242600-b4c3-4941-93fa-1cf4cb16e6cc","Type":"ContainerDied","Data":"1af7b2f7f2a035fb2fda2e0df2721a67756c9ba8f42c15cd0c3b9043cb4f85ba"} Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.324291 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1af7b2f7f2a035fb2fda2e0df2721a67756c9ba8f42c15cd0c3b9043cb4f85ba" Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.451425 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85242600-b4c3-4941-93fa-1cf4cb16e6cc-operator-scripts\") pod \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\" (UID: \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\") " Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.451862 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk94x\" (UniqueName: \"kubernetes.io/projected/85242600-b4c3-4941-93fa-1cf4cb16e6cc-kube-api-access-lk94x\") pod \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\" (UID: \"85242600-b4c3-4941-93fa-1cf4cb16e6cc\") " Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.453574 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85242600-b4c3-4941-93fa-1cf4cb16e6cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85242600-b4c3-4941-93fa-1cf4cb16e6cc" (UID: "85242600-b4c3-4941-93fa-1cf4cb16e6cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.484465 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85242600-b4c3-4941-93fa-1cf4cb16e6cc-kube-api-access-lk94x" (OuterVolumeSpecName: "kube-api-access-lk94x") pod "85242600-b4c3-4941-93fa-1cf4cb16e6cc" (UID: "85242600-b4c3-4941-93fa-1cf4cb16e6cc"). InnerVolumeSpecName "kube-api-access-lk94x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.558628 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85242600-b4c3-4941-93fa-1cf4cb16e6cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:28 crc kubenswrapper[4904]: I0223 10:24:28.558671 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk94x\" (UniqueName: \"kubernetes.io/projected/85242600-b4c3-4941-93fa-1cf4cb16e6cc-kube-api-access-lk94x\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:29 crc kubenswrapper[4904]: I0223 10:24:29.359498 4904 generic.go:334] "Generic (PLEG): container finished" podID="4aefea65-709c-4842-88b3-75ecd926e2de" containerID="9f9b33d6ed1075c890d231b22368e9b05b596843faa41986ce8b05deaf62edf9" exitCode=0 Feb 23 10:24:29 crc kubenswrapper[4904]: I0223 10:24:29.359604 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7058-account-create-update-9h8tl" event={"ID":"4aefea65-709c-4842-88b3-75ecd926e2de","Type":"ContainerDied","Data":"9f9b33d6ed1075c890d231b22368e9b05b596843faa41986ce8b05deaf62edf9"} Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.120661 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.132258 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.216604 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.219898 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aefea65-709c-4842-88b3-75ecd926e2de-operator-scripts\") pod \"4aefea65-709c-4842-88b3-75ecd926e2de\" (UID: \"4aefea65-709c-4842-88b3-75ecd926e2de\") " Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.220069 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-operator-scripts\") pod \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\" (UID: \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\") " Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.220256 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mkww\" (UniqueName: \"kubernetes.io/projected/4aefea65-709c-4842-88b3-75ecd926e2de-kube-api-access-2mkww\") pod \"4aefea65-709c-4842-88b3-75ecd926e2de\" (UID: \"4aefea65-709c-4842-88b3-75ecd926e2de\") " Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.220328 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-882pg\" (UniqueName: \"kubernetes.io/projected/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-kube-api-access-882pg\") pod \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\" (UID: \"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6\") " Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.220558 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aefea65-709c-4842-88b3-75ecd926e2de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4aefea65-709c-4842-88b3-75ecd926e2de" (UID: "4aefea65-709c-4842-88b3-75ecd926e2de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.220829 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aefea65-709c-4842-88b3-75ecd926e2de-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.220927 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6" (UID: "62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.229049 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aefea65-709c-4842-88b3-75ecd926e2de-kube-api-access-2mkww" (OuterVolumeSpecName: "kube-api-access-2mkww") pod "4aefea65-709c-4842-88b3-75ecd926e2de" (UID: "4aefea65-709c-4842-88b3-75ecd926e2de"). InnerVolumeSpecName "kube-api-access-2mkww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.236847 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-kube-api-access-882pg" (OuterVolumeSpecName: "kube-api-access-882pg") pod "62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6" (UID: "62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6"). InnerVolumeSpecName "kube-api-access-882pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.323184 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.323220 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mkww\" (UniqueName: \"kubernetes.io/projected/4aefea65-709c-4842-88b3-75ecd926e2de-kube-api-access-2mkww\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.323233 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-882pg\" (UniqueName: \"kubernetes.io/projected/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6-kube-api-access-882pg\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.389665 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rbxsv" event={"ID":"62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6","Type":"ContainerDied","Data":"21909e9e74f0269701097e90df20554f4e40f1eaf7cc267362b9041da33534ff"} Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.389748 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21909e9e74f0269701097e90df20554f4e40f1eaf7cc267362b9041da33534ff" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.389841 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rbxsv" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.392570 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7058-account-create-update-9h8tl" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.392576 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7058-account-create-update-9h8tl" event={"ID":"4aefea65-709c-4842-88b3-75ecd926e2de","Type":"ContainerDied","Data":"f4dc6174edc6ff96f419886efaddc652b0388694ba051c1a0301b2cb391bc72f"} Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.392605 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4dc6174edc6ff96f419886efaddc652b0388694ba051c1a0301b2cb391bc72f" Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.396457 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wczj2" event={"ID":"48c89630-853b-4415-88b0-2282c4a9e9f9","Type":"ContainerStarted","Data":"d21eabfba6e7afc6502e5192ff5344a464376ffe90ad8681c214b7d3b00054bd"} Feb 23 10:24:31 crc kubenswrapper[4904]: I0223 10:24:31.423547 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-wczj2" podStartSLOduration=8.423514823 podStartE2EDuration="8.423514823s" podCreationTimestamp="2026-02-23 10:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:24:31.420633141 +0000 UTC m=+1104.841006654" watchObservedRunningTime="2026-02-23 10:24:31.423514823 +0000 UTC m=+1104.843888336" Feb 23 10:24:32 crc kubenswrapper[4904]: I0223 10:24:32.407813 4904 generic.go:334] "Generic (PLEG): container finished" podID="48c89630-853b-4415-88b0-2282c4a9e9f9" containerID="d21eabfba6e7afc6502e5192ff5344a464376ffe90ad8681c214b7d3b00054bd" exitCode=0 Feb 23 10:24:32 crc kubenswrapper[4904]: I0223 10:24:32.407923 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wczj2" event={"ID":"48c89630-853b-4415-88b0-2282c4a9e9f9","Type":"ContainerDied","Data":"d21eabfba6e7afc6502e5192ff5344a464376ffe90ad8681c214b7d3b00054bd"} Feb 23 10:24:33 crc kubenswrapper[4904]: I0223 10:24:33.425762 4904 generic.go:334] "Generic (PLEG): container finished" podID="d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" containerID="a18a085f816229cf166da1f20a67ea25169961a530168ed8c2ec858485860440" exitCode=0 Feb 23 10:24:33 crc kubenswrapper[4904]: I0223 10:24:33.425821 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mkr6g" event={"ID":"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e","Type":"ContainerDied","Data":"a18a085f816229cf166da1f20a67ea25169961a530168ed8c2ec858485860440"} Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.770692 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.790590 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.812187 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c89630-853b-4415-88b0-2282c4a9e9f9-operator-scripts\") pod \"48c89630-853b-4415-88b0-2282c4a9e9f9\" (UID: \"48c89630-853b-4415-88b0-2282c4a9e9f9\") " Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.812366 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92vfk\" (UniqueName: \"kubernetes.io/projected/48c89630-853b-4415-88b0-2282c4a9e9f9-kube-api-access-92vfk\") pod \"48c89630-853b-4415-88b0-2282c4a9e9f9\" (UID: \"48c89630-853b-4415-88b0-2282c4a9e9f9\") " Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.815137 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c89630-853b-4415-88b0-2282c4a9e9f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48c89630-853b-4415-88b0-2282c4a9e9f9" (UID: "48c89630-853b-4415-88b0-2282c4a9e9f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.830404 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c89630-853b-4415-88b0-2282c4a9e9f9-kube-api-access-92vfk" (OuterVolumeSpecName: "kube-api-access-92vfk") pod "48c89630-853b-4415-88b0-2282c4a9e9f9" (UID: "48c89630-853b-4415-88b0-2282c4a9e9f9"). InnerVolumeSpecName "kube-api-access-92vfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.917395 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-config-data\") pod \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.917548 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrp95\" (UniqueName: \"kubernetes.io/projected/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-kube-api-access-jrp95\") pod \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.917790 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-db-sync-config-data\") pod \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.917851 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-combined-ca-bundle\") pod \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\" (UID: \"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e\") " Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.918318 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48c89630-853b-4415-88b0-2282c4a9e9f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.918339 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92vfk\" 
(UniqueName: \"kubernetes.io/projected/48c89630-853b-4415-88b0-2282c4a9e9f9-kube-api-access-92vfk\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.928652 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" (UID: "d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.934304 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-kube-api-access-jrp95" (OuterVolumeSpecName: "kube-api-access-jrp95") pod "d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" (UID: "d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e"). InnerVolumeSpecName "kube-api-access-jrp95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:35 crc kubenswrapper[4904]: I0223 10:24:35.996363 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-config-data" (OuterVolumeSpecName: "config-data") pod "d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" (UID: "d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.000584 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" (UID: "d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.020254 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.020704 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrp95\" (UniqueName: \"kubernetes.io/projected/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-kube-api-access-jrp95\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.020735 4904 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.020744 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.216768 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.224670 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.463422 4904 generic.go:334] "Generic (PLEG): container finished" podID="51d23998-f8e9-45bc-9af7-f37221dc0390" containerID="eb2a9783514dac00ea7ca2c8b57763124112cf22b8307716d3f78d5fd03aff4a" exitCode=0 Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.463472 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-65pr4" event={"ID":"51d23998-f8e9-45bc-9af7-f37221dc0390","Type":"ContainerDied","Data":"eb2a9783514dac00ea7ca2c8b57763124112cf22b8307716d3f78d5fd03aff4a"} Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.468586 4904 generic.go:334] "Generic (PLEG): container finished" podID="40418ddd-1e86-4037-b7fc-8f2ff25f6b3c" containerID="30091519ac939d0a7863780523865aa01b08fc4f4b609956cc7aeef3d16e8b53" exitCode=0 Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.468723 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b16-account-create-update-z7z8t" event={"ID":"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c","Type":"ContainerDied","Data":"30091519ac939d0a7863780523865aa01b08fc4f4b609956cc7aeef3d16e8b53"} Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.471070 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wczj2" event={"ID":"48c89630-853b-4415-88b0-2282c4a9e9f9","Type":"ContainerDied","Data":"5b325dbc03a36cfeeded5bd15c6cd97435aefa39d9693925d2369b41c3efea01"} Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.471128 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b325dbc03a36cfeeded5bd15c6cd97435aefa39d9693925d2369b41c3efea01" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.471095 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-wczj2" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.475401 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mkr6g" event={"ID":"d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e","Type":"ContainerDied","Data":"f9bc793e59e1ff71bdc5419ad06fe48e7ab5fbb530b10c5782fcfe17bf952247"} Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.475453 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9bc793e59e1ff71bdc5419ad06fe48e7ab5fbb530b10c5782fcfe17bf952247" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.476168 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mkr6g" Feb 23 10:24:36 crc kubenswrapper[4904]: I0223 10:24:36.483609 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.365554 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-srlj6"] Feb 23 10:24:37 crc kubenswrapper[4904]: E0223 10:24:37.366480 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6" containerName="mariadb-database-create" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366497 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6" containerName="mariadb-database-create" Feb 23 10:24:37 crc kubenswrapper[4904]: E0223 10:24:37.366518 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" containerName="glance-db-sync" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366525 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" containerName="glance-db-sync" Feb 23 10:24:37 crc kubenswrapper[4904]: E0223 10:24:37.366535 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aefea65-709c-4842-88b3-75ecd926e2de" containerName="mariadb-account-create-update" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366543 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aefea65-709c-4842-88b3-75ecd926e2de" containerName="mariadb-account-create-update" Feb 23 10:24:37 crc kubenswrapper[4904]: E0223 10:24:37.366555 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c89630-853b-4415-88b0-2282c4a9e9f9" containerName="mariadb-database-create" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366562 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c89630-853b-4415-88b0-2282c4a9e9f9" containerName="mariadb-database-create" Feb 23 10:24:37 crc kubenswrapper[4904]: E0223 10:24:37.366584 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85242600-b4c3-4941-93fa-1cf4cb16e6cc" containerName="mariadb-account-create-update" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366590 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="85242600-b4c3-4941-93fa-1cf4cb16e6cc" containerName="mariadb-account-create-update" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366782 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" containerName="glance-db-sync" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366800 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6" 
containerName="mariadb-database-create" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366807 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aefea65-709c-4842-88b3-75ecd926e2de" containerName="mariadb-account-create-update" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366817 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="85242600-b4c3-4941-93fa-1cf4cb16e6cc" containerName="mariadb-account-create-update" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.366825 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c89630-853b-4415-88b0-2282c4a9e9f9" containerName="mariadb-database-create" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.371646 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.396013 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-srlj6"] Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.456190 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-config\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.456515 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.456647 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.456788 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.456851 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgpf8\" (UniqueName: \"kubernetes.io/projected/87ac5c77-0a8d-4041-b030-6053e617dbfe-kube-api-access-xgpf8\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.558653 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgpf8\" (UniqueName: \"kubernetes.io/projected/87ac5c77-0a8d-4041-b030-6053e617dbfe-kube-api-access-xgpf8\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.558795 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-config\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.558887 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.558926 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.558965 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.560233 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.560498 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.561164 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.561602 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-config\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.582819 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgpf8\" (UniqueName: \"kubernetes.io/projected/87ac5c77-0a8d-4041-b030-6053e617dbfe-kube-api-access-xgpf8\") pod \"dnsmasq-dns-5b946c75cc-srlj6\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:37 crc kubenswrapper[4904]: I0223 10:24:37.707943 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:41 crc kubenswrapper[4904]: E0223 10:24:41.995029 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Feb 23 10:24:41 crc kubenswrapper[4904]: E0223 10:24:41.996242 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pt5jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-kt929_openstack(9d1a74dc-f43f-4b29-ab12-8d5169a4b69d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:24:41 crc kubenswrapper[4904]: E0223 10:24:41.997921 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-kt929" podUID="9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.249102 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.288467 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.383545 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-operator-scripts\") pod \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\" (UID: \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\") " Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.383765 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqfw2\" (UniqueName: \"kubernetes.io/projected/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-kube-api-access-gqfw2\") pod \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\" (UID: \"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c\") " Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.383817 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnvk7\" (UniqueName: \"kubernetes.io/projected/51d23998-f8e9-45bc-9af7-f37221dc0390-kube-api-access-wnvk7\") pod \"51d23998-f8e9-45bc-9af7-f37221dc0390\" (UID: \"51d23998-f8e9-45bc-9af7-f37221dc0390\") " Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.384058 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d23998-f8e9-45bc-9af7-f37221dc0390-operator-scripts\") pod \"51d23998-f8e9-45bc-9af7-f37221dc0390\" (UID: \"51d23998-f8e9-45bc-9af7-f37221dc0390\") " Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.384940 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40418ddd-1e86-4037-b7fc-8f2ff25f6b3c" (UID: "40418ddd-1e86-4037-b7fc-8f2ff25f6b3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.385355 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d23998-f8e9-45bc-9af7-f37221dc0390-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51d23998-f8e9-45bc-9af7-f37221dc0390" (UID: "51d23998-f8e9-45bc-9af7-f37221dc0390"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.390091 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-kube-api-access-gqfw2" (OuterVolumeSpecName: "kube-api-access-gqfw2") pod "40418ddd-1e86-4037-b7fc-8f2ff25f6b3c" (UID: "40418ddd-1e86-4037-b7fc-8f2ff25f6b3c"). InnerVolumeSpecName "kube-api-access-gqfw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.391991 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d23998-f8e9-45bc-9af7-f37221dc0390-kube-api-access-wnvk7" (OuterVolumeSpecName: "kube-api-access-wnvk7") pod "51d23998-f8e9-45bc-9af7-f37221dc0390" (UID: "51d23998-f8e9-45bc-9af7-f37221dc0390"). InnerVolumeSpecName "kube-api-access-wnvk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.486565 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d23998-f8e9-45bc-9af7-f37221dc0390-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.486620 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.486632 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqfw2\" (UniqueName: \"kubernetes.io/projected/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c-kube-api-access-gqfw2\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.486644 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnvk7\" (UniqueName: \"kubernetes.io/projected/51d23998-f8e9-45bc-9af7-f37221dc0390-kube-api-access-wnvk7\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.543787 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-srlj6"] Feb 23 10:24:42 crc kubenswrapper[4904]: W0223 10:24:42.550635 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87ac5c77_0a8d_4041_b030_6053e617dbfe.slice/crio-e621f0f5dab03a0f66ce483901d18546ffc227239fbbf372e0386d444d42cd9c WatchSource:0}: Error finding container e621f0f5dab03a0f66ce483901d18546ffc227239fbbf372e0386d444d42cd9c: Status 404 returned error can't find the container with id e621f0f5dab03a0f66ce483901d18546ffc227239fbbf372e0386d444d42cd9c Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.553067 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"6531ac8a759c3c5fe5fb87ca4bf20711005cc61c74cd6e0ce369b2020074d1ef"} Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.559207 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-rf6dz" event={"ID":"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7","Type":"ContainerStarted","Data":"2dadf099050c5fde95b4ca2898c58548dd27a57c15431d100f50bd18a59c3857"} Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.568216 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-65pr4" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.568250 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-65pr4" event={"ID":"51d23998-f8e9-45bc-9af7-f37221dc0390","Type":"ContainerDied","Data":"22ce1c374fe443d47b97bd49505383e2238cad72de5d3df05c096785441eaed1"} Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.568326 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ce1c374fe443d47b97bd49505383e2238cad72de5d3df05c096785441eaed1" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.574901 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b16-account-create-update-z7z8t" event={"ID":"40418ddd-1e86-4037-b7fc-8f2ff25f6b3c","Type":"ContainerDied","Data":"bc293b9a265fbe68cf9b99e5fad8552f798b0a6c4096419dff63ae9542937d39"} Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.574964 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc293b9a265fbe68cf9b99e5fad8552f798b0a6c4096419dff63ae9542937d39" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.574914 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b16-account-create-update-z7z8t" Feb 23 10:24:42 crc kubenswrapper[4904]: E0223 10:24:42.578215 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-kt929" podUID="9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" Feb 23 10:24:42 crc kubenswrapper[4904]: I0223 10:24:42.599690 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-rf6dz" podStartSLOduration=2.3670946170000002 podStartE2EDuration="19.599650359s" podCreationTimestamp="2026-02-23 10:24:23 +0000 UTC" firstStartedPulling="2026-02-23 10:24:24.867773676 +0000 UTC m=+1098.288147189" lastFinishedPulling="2026-02-23 10:24:42.100329418 +0000 UTC m=+1115.520702931" observedRunningTime="2026-02-23 10:24:42.587852434 +0000 UTC m=+1116.008225957" watchObservedRunningTime="2026-02-23 10:24:42.599650359 +0000 UTC m=+1116.020023872" Feb 23 10:24:43 crc kubenswrapper[4904]: I0223 10:24:43.602020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"a70d9bb82fd2b2ba10f4997f48c34b5b09f9d2e570a8d97c7925be4a4616e6ab"} Feb 23 10:24:43 crc kubenswrapper[4904]: I0223 10:24:43.602540 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"ae3f9721fdf655e07b8b582630c5b690122d30c08169628b60aab01fdb28d759"} Feb 23 10:24:43 crc kubenswrapper[4904]: I0223 10:24:43.602568 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"b7232234d4a8bfdae89eaa7a1ecadf7d8bd259b193f02c1013627631a90726cb"} Feb 23 10:24:43 crc kubenswrapper[4904]: I0223 10:24:43.605890 4904 generic.go:334] "Generic (PLEG): container finished" podID="87ac5c77-0a8d-4041-b030-6053e617dbfe" containerID="cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0" exitCode=0 Feb 23 10:24:43 crc 
kubenswrapper[4904]: I0223 10:24:43.605925 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" event={"ID":"87ac5c77-0a8d-4041-b030-6053e617dbfe","Type":"ContainerDied","Data":"cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0"} Feb 23 10:24:43 crc kubenswrapper[4904]: I0223 10:24:43.605969 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" event={"ID":"87ac5c77-0a8d-4041-b030-6053e617dbfe","Type":"ContainerStarted","Data":"e621f0f5dab03a0f66ce483901d18546ffc227239fbbf372e0386d444d42cd9c"} Feb 23 10:24:44 crc kubenswrapper[4904]: I0223 10:24:44.631752 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" event={"ID":"87ac5c77-0a8d-4041-b030-6053e617dbfe","Type":"ContainerStarted","Data":"ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068"} Feb 23 10:24:44 crc kubenswrapper[4904]: I0223 10:24:44.633050 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:44 crc kubenswrapper[4904]: I0223 10:24:44.662561 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" podStartSLOduration=7.662535058 podStartE2EDuration="7.662535058s" podCreationTimestamp="2026-02-23 10:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:24:44.65381112 +0000 UTC m=+1118.074184633" watchObservedRunningTime="2026-02-23 10:24:44.662535058 +0000 UTC m=+1118.082908571" Feb 23 10:24:45 crc kubenswrapper[4904]: I0223 10:24:45.673229 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"eb189649e68aec61f349f01cb9342e9a8e23c7e17ef2f253c109b44c6a9227a0"} Feb 23 10:24:45 crc kubenswrapper[4904]: I0223 10:24:45.673906 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"819598efe8bff1b4017841e8a859d7f68985ff9de6b6b5bb27ffb3aec7758aea"} Feb 23 10:24:45 crc kubenswrapper[4904]: I0223 10:24:45.673926 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"99f28e059e3eeb15854323c7a0996a77efd1cff97107c71b2a51f69574e47161"} Feb 23 10:24:45 crc kubenswrapper[4904]: I0223 10:24:45.673941 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"ac19145181931d2e721fcc9620e55d1baf01136864ecbc6da2f1ff330728d7c9"} Feb 23 10:24:45 crc kubenswrapper[4904]: I0223 10:24:45.673954 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"1fdc48664e0bb422188ffc30fe092692d04a7fc9a242b04a1bab31d0644151a8"} Feb 23 10:24:45 crc kubenswrapper[4904]: I0223 10:24:45.675662 4904 generic.go:334] "Generic (PLEG): container finished" podID="1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" containerID="2dadf099050c5fde95b4ca2898c58548dd27a57c15431d100f50bd18a59c3857" exitCode=0 Feb 23 10:24:45 crc kubenswrapper[4904]: I0223 10:24:45.676476 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-db-sync-rf6dz" event={"ID":"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7","Type":"ContainerDied","Data":"2dadf099050c5fde95b4ca2898c58548dd27a57c15431d100f50bd18a59c3857"} Feb 23 10:24:46 crc kubenswrapper[4904]: I0223 10:24:46.703556 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"5af8f4b9eafa94a45f1cdf729679812074883c52b5962804f47eaf2805a3f7b1"} Feb 23 10:24:46 crc kubenswrapper[4904]: I0223 10:24:46.706012 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"71f24a32-6e0a-4a39-9570-92c373672a9b","Type":"ContainerStarted","Data":"5361e39963e191a1003ca80da3833f668ebf339d2c267bb829b0d9df778023f2"} Feb 23 10:24:46 crc kubenswrapper[4904]: I0223 10:24:46.753102 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.42297769 podStartE2EDuration="1m8.753076233s" podCreationTimestamp="2026-02-23 10:23:38 +0000 UTC" firstStartedPulling="2026-02-23 10:24:12.225559977 +0000 UTC m=+1085.645933500" lastFinishedPulling="2026-02-23 10:24:44.55565853 +0000 UTC m=+1117.976032043" observedRunningTime="2026-02-23 10:24:46.749148802 +0000 UTC m=+1120.169522335" watchObservedRunningTime="2026-02-23 10:24:46.753076233 +0000 UTC m=+1120.173449746" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.062272 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-srlj6"] Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.105799 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g79dn"] Feb 23 10:24:47 crc kubenswrapper[4904]: E0223 10:24:47.106380 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40418ddd-1e86-4037-b7fc-8f2ff25f6b3c" containerName="mariadb-account-create-update" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.106403 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="40418ddd-1e86-4037-b7fc-8f2ff25f6b3c" containerName="mariadb-account-create-update" Feb 23 10:24:47 crc kubenswrapper[4904]: E0223 10:24:47.106449 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d23998-f8e9-45bc-9af7-f37221dc0390" containerName="mariadb-database-create" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.106457 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d23998-f8e9-45bc-9af7-f37221dc0390" containerName="mariadb-database-create" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.106690 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d23998-f8e9-45bc-9af7-f37221dc0390" containerName="mariadb-database-create" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.106754 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="40418ddd-1e86-4037-b7fc-8f2ff25f6b3c" containerName="mariadb-account-create-update" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.108178 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.109940 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.122863 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g79dn"] Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.162456 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.196810 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.196967 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-config\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.197030 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjp9\" (UniqueName: \"kubernetes.io/projected/300121d9-ca54-432f-b210-a3bb1df0f8fc-kube-api-access-bxjp9\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.197083 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.197134 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.197176 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.298359 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-config-data\") pod \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.298834 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-combined-ca-bundle\") pod \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.298912 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-db-sync-config-data\") pod \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.299046 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5wn8\" (UniqueName: \"kubernetes.io/projected/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-kube-api-access-k5wn8\") pod \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\" (UID: \"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7\") " Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.299290 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.299337 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.299375 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.299436 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.299506 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-config\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.299538 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxjp9\" (UniqueName: \"kubernetes.io/projected/300121d9-ca54-432f-b210-a3bb1df0f8fc-kube-api-access-bxjp9\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.301048 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: 
\"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.301254 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.301318 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.301823 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-config\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.303180 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.308914 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-kube-api-access-k5wn8" (OuterVolumeSpecName: "kube-api-access-k5wn8") pod "1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" (UID: "1ea2afdb-ef40-490c-9b9c-5acd41bcbee7"). InnerVolumeSpecName "kube-api-access-k5wn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.317040 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" (UID: "1ea2afdb-ef40-490c-9b9c-5acd41bcbee7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.320873 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxjp9\" (UniqueName: \"kubernetes.io/projected/300121d9-ca54-432f-b210-a3bb1df0f8fc-kube-api-access-bxjp9\") pod \"dnsmasq-dns-74f6bcbc87-g79dn\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.347998 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" (UID: "1ea2afdb-ef40-490c-9b9c-5acd41bcbee7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.360351 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-config-data" (OuterVolumeSpecName: "config-data") pod "1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" (UID: "1ea2afdb-ef40-490c-9b9c-5acd41bcbee7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.398247 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.398336 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.401426 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5wn8\" (UniqueName: \"kubernetes.io/projected/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-kube-api-access-k5wn8\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.401465 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.401477 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.401489 4904 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.493876 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.756935 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-rf6dz" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.758994 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-rf6dz" event={"ID":"1ea2afdb-ef40-490c-9b9c-5acd41bcbee7","Type":"ContainerDied","Data":"8fd841841b8ff06239eff3b2632c7988c180ec617776a66e9d372fbbedba1258"} Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.759057 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fd841841b8ff06239eff3b2632c7988c180ec617776a66e9d372fbbedba1258" Feb 23 10:24:47 crc kubenswrapper[4904]: I0223 10:24:47.759335 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" podUID="87ac5c77-0a8d-4041-b030-6053e617dbfe" containerName="dnsmasq-dns" containerID="cri-o://ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068" gracePeriod=10 Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.191760 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g79dn"] Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.364371 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.428637 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-dns-svc\") pod \"87ac5c77-0a8d-4041-b030-6053e617dbfe\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.428778 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgpf8\" (UniqueName: \"kubernetes.io/projected/87ac5c77-0a8d-4041-b030-6053e617dbfe-kube-api-access-xgpf8\") pod \"87ac5c77-0a8d-4041-b030-6053e617dbfe\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.428822 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-nb\") pod \"87ac5c77-0a8d-4041-b030-6053e617dbfe\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.428944 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-sb\") pod \"87ac5c77-0a8d-4041-b030-6053e617dbfe\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.428973 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-config\") pod \"87ac5c77-0a8d-4041-b030-6053e617dbfe\" (UID: \"87ac5c77-0a8d-4041-b030-6053e617dbfe\") " Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.435792 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ac5c77-0a8d-4041-b030-6053e617dbfe-kube-api-access-xgpf8" (OuterVolumeSpecName: "kube-api-access-xgpf8") pod "87ac5c77-0a8d-4041-b030-6053e617dbfe" (UID: "87ac5c77-0a8d-4041-b030-6053e617dbfe"). InnerVolumeSpecName "kube-api-access-xgpf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.481165 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87ac5c77-0a8d-4041-b030-6053e617dbfe" (UID: "87ac5c77-0a8d-4041-b030-6053e617dbfe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.481279 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87ac5c77-0a8d-4041-b030-6053e617dbfe" (UID: "87ac5c77-0a8d-4041-b030-6053e617dbfe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.481853 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-config" (OuterVolumeSpecName: "config") pod "87ac5c77-0a8d-4041-b030-6053e617dbfe" (UID: "87ac5c77-0a8d-4041-b030-6053e617dbfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.486571 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87ac5c77-0a8d-4041-b030-6053e617dbfe" (UID: "87ac5c77-0a8d-4041-b030-6053e617dbfe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.531545 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.531591 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.531603 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.531618 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgpf8\" (UniqueName: \"kubernetes.io/projected/87ac5c77-0a8d-4041-b030-6053e617dbfe-kube-api-access-xgpf8\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.531630 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ac5c77-0a8d-4041-b030-6053e617dbfe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.766894 4904 generic.go:334] "Generic (PLEG): container finished" podID="300121d9-ca54-432f-b210-a3bb1df0f8fc" containerID="0a13063e194e3f15012fa973b2060a23eab37f45aec9db453ef4bbc8099110e9" exitCode=0 Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.767448 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" 
event={"ID":"300121d9-ca54-432f-b210-a3bb1df0f8fc","Type":"ContainerDied","Data":"0a13063e194e3f15012fa973b2060a23eab37f45aec9db453ef4bbc8099110e9"} Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.767598 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" event={"ID":"300121d9-ca54-432f-b210-a3bb1df0f8fc","Type":"ContainerStarted","Data":"863618f7ecc955edc1f18b03be69d041ea49e354c8114132c81a8ad9d0bf211e"} Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.769808 4904 generic.go:334] "Generic (PLEG): container finished" podID="87ac5c77-0a8d-4041-b030-6053e617dbfe" containerID="ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068" exitCode=0 Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.769866 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" event={"ID":"87ac5c77-0a8d-4041-b030-6053e617dbfe","Type":"ContainerDied","Data":"ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068"} Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.769909 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" event={"ID":"87ac5c77-0a8d-4041-b030-6053e617dbfe","Type":"ContainerDied","Data":"e621f0f5dab03a0f66ce483901d18546ffc227239fbbf372e0386d444d42cd9c"} Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.769920 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-srlj6" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.769931 4904 scope.go:117] "RemoveContainer" containerID="ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.820434 4904 scope.go:117] "RemoveContainer" containerID="cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.946137 4904 scope.go:117] "RemoveContainer" containerID="ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068" Feb 23 10:24:48 crc kubenswrapper[4904]: E0223 10:24:48.946736 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068\": container with ID starting with ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068 not found: ID does not exist" containerID="ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.946774 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068"} err="failed to get container status \"ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068\": rpc error: code = NotFound desc = could not find container \"ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068\": container with ID starting with ff3aa6cf49db02177cc8f44911f449ffb437c61fdcf076d31bbe011f22681068 not found: ID does not exist" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.946801 4904 scope.go:117] "RemoveContainer" containerID="cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0" Feb 23 10:24:48 crc kubenswrapper[4904]: E0223 10:24:48.947339 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0\": container with ID starting with cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0 not found: ID does not exist" containerID="cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0" Feb 23 10:24:48 crc kubenswrapper[4904]: I0223 10:24:48.947363 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0"} err="failed to get container status \"cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0\": rpc error: code = NotFound desc = could not find container \"cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0\": container with ID starting with cbc7c6a0747f9b82091cc1810ea3a02c744577f379da5d06add2a94640ca5fc0 not found: ID does not exist" Feb 23 10:24:49 crc kubenswrapper[4904]: I0223 10:24:49.006925 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-srlj6"] Feb 23 10:24:49 crc kubenswrapper[4904]: I0223 10:24:49.043215 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-srlj6"] Feb 23 10:24:49 crc kubenswrapper[4904]: I0223 10:24:49.270730 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ac5c77-0a8d-4041-b030-6053e617dbfe" path="/var/lib/kubelet/pods/87ac5c77-0a8d-4041-b030-6053e617dbfe/volumes" Feb 23 10:24:49 crc kubenswrapper[4904]: I0223 10:24:49.786132 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" event={"ID":"300121d9-ca54-432f-b210-a3bb1df0f8fc","Type":"ContainerStarted","Data":"88a7988dd78e1bce6e164eff7425ecaed2b86ad3bc6521030442f27032dcaa2b"} Feb 23 10:24:49 crc kubenswrapper[4904]: I0223 10:24:49.786339 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:49 crc kubenswrapper[4904]: I0223 10:24:49.827558 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" podStartSLOduration=2.8275337130000002 podStartE2EDuration="2.827533713s" podCreationTimestamp="2026-02-23 10:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:24:49.81582125 +0000 UTC m=+1123.236194773" watchObservedRunningTime="2026-02-23 10:24:49.827533713 +0000 UTC m=+1123.247907216" Feb 23 10:24:56 crc kubenswrapper[4904]: I0223 10:24:56.872798 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kt929" event={"ID":"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d","Type":"ContainerStarted","Data":"4898a9a31fdf05f877c11d0498e089985fbf61fc7c151d9f0e04ec1dcadefac7"} Feb 23 10:24:56 crc kubenswrapper[4904]: I0223 10:24:56.901183 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kt929" podStartSLOduration=4.674536358 podStartE2EDuration="33.901149451s" podCreationTimestamp="2026-02-23 10:24:23 +0000 UTC" firstStartedPulling="2026-02-23 10:24:26.445914319 +0000 UTC m=+1099.866287832" lastFinishedPulling="2026-02-23 10:24:55.672527402 +0000 UTC m=+1129.092900925" observedRunningTime="2026-02-23 10:24:56.893756961 +0000 UTC m=+1130.314130474" watchObservedRunningTime="2026-02-23 10:24:56.901149451 +0000 UTC m=+1130.321522974" Feb 23 10:24:57 crc kubenswrapper[4904]: I0223 10:24:57.496211 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:24:57 crc kubenswrapper[4904]: I0223 10:24:57.629612 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-89b5g"] Feb 23 10:24:57 crc kubenswrapper[4904]: I0223 10:24:57.630354 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-89b5g" podUID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" containerName="dnsmasq-dns" containerID="cri-o://8e0c0c2ee0fed55ae446a6a3f38e25cf313f5f0659b65ccd8a05cb1c83891807" gracePeriod=10 Feb 23 10:24:57 crc kubenswrapper[4904]: I0223 10:24:57.884854 4904 generic.go:334] "Generic (PLEG): container finished" podID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" containerID="8e0c0c2ee0fed55ae446a6a3f38e25cf313f5f0659b65ccd8a05cb1c83891807" exitCode=0 Feb 23 10:24:57 crc kubenswrapper[4904]: I0223 10:24:57.885211 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-89b5g" event={"ID":"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd","Type":"ContainerDied","Data":"8e0c0c2ee0fed55ae446a6a3f38e25cf313f5f0659b65ccd8a05cb1c83891807"} Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.105177 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.255456 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-sb\") pod \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.255600 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-nb\") pod \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.255644 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-dns-svc\") pod \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.255734 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-config\") pod \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.255771 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fg6x\" (UniqueName: \"kubernetes.io/projected/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-kube-api-access-8fg6x\") pod \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\" (UID: \"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd\") " Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.263017 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-kube-api-access-8fg6x" (OuterVolumeSpecName: "kube-api-access-8fg6x") pod "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" (UID: "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd"). InnerVolumeSpecName "kube-api-access-8fg6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.307647 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" (UID: "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.312327 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" (UID: "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.327252 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-config" (OuterVolumeSpecName: "config") pod "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" (UID: "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.330228 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" (UID: "b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.358222 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.358276 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.358291 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.358304 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.358315 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fg6x\" (UniqueName: \"kubernetes.io/projected/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd-kube-api-access-8fg6x\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.896424 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-89b5g" event={"ID":"b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd","Type":"ContainerDied","Data":"2aad8aec7e2d52177876a89035e795115a15bb7eb92cd933f85f628b58b976ef"} Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.897195 4904 scope.go:117] "RemoveContainer" 
containerID="8e0c0c2ee0fed55ae446a6a3f38e25cf313f5f0659b65ccd8a05cb1c83891807" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.896691 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-89b5g" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.899187 4904 generic.go:334] "Generic (PLEG): container finished" podID="9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" containerID="4898a9a31fdf05f877c11d0498e089985fbf61fc7c151d9f0e04ec1dcadefac7" exitCode=0 Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.899238 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kt929" event={"ID":"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d","Type":"ContainerDied","Data":"4898a9a31fdf05f877c11d0498e089985fbf61fc7c151d9f0e04ec1dcadefac7"} Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.931484 4904 scope.go:117] "RemoveContainer" containerID="601e87cb8e0a01d692d92986aeafa11c631326d911a94e200e580536d62df253" Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.958485 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-89b5g"] Feb 23 10:24:58 crc kubenswrapper[4904]: I0223 10:24:58.969928 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-89b5g"] Feb 23 10:24:59 crc kubenswrapper[4904]: I0223 10:24:59.271545 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" path="/var/lib/kubelet/pods/b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd/volumes" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.374563 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kt929" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.506854 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-combined-ca-bundle\") pod \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.506938 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-config-data\") pod \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.507245 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt5jp\" (UniqueName: \"kubernetes.io/projected/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-kube-api-access-pt5jp\") pod \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\" (UID: \"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d\") " Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.514744 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-kube-api-access-pt5jp" (OuterVolumeSpecName: "kube-api-access-pt5jp") pod "9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" (UID: "9d1a74dc-f43f-4b29-ab12-8d5169a4b69d"). InnerVolumeSpecName "kube-api-access-pt5jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.536203 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" (UID: "9d1a74dc-f43f-4b29-ab12-8d5169a4b69d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.550951 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-config-data" (OuterVolumeSpecName: "config-data") pod "9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" (UID: "9d1a74dc-f43f-4b29-ab12-8d5169a4b69d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.609738 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.609786 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.609804 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt5jp\" (UniqueName: \"kubernetes.io/projected/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d-kube-api-access-pt5jp\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.937677 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kt929" event={"ID":"9d1a74dc-f43f-4b29-ab12-8d5169a4b69d","Type":"ContainerDied","Data":"a8363311105e6e2d95a04c1d4feaaf58ff9e63d9e421e549b393386b5b2a839e"} Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.938475 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8363311105e6e2d95a04c1d4feaaf58ff9e63d9e421e549b393386b5b2a839e" Feb 23 10:25:00 crc kubenswrapper[4904]: I0223 10:25:00.937836 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kt929" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.279767 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m47lx"] Feb 23 10:25:01 crc kubenswrapper[4904]: E0223 10:25:01.280329 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" containerName="keystone-db-sync" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.280355 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" containerName="keystone-db-sync" Feb 23 10:25:01 crc kubenswrapper[4904]: E0223 10:25:01.280373 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ac5c77-0a8d-4041-b030-6053e617dbfe" containerName="dnsmasq-dns" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.280381 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ac5c77-0a8d-4041-b030-6053e617dbfe" containerName="dnsmasq-dns" Feb 23 10:25:01 crc kubenswrapper[4904]: E0223 10:25:01.280400 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" containerName="dnsmasq-dns" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.280412 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" containerName="dnsmasq-dns" Feb 23 10:25:01 crc kubenswrapper[4904]: E0223 10:25:01.280444 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" containerName="watcher-db-sync" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.280453 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" containerName="watcher-db-sync" Feb 23 10:25:01 crc kubenswrapper[4904]: E0223 10:25:01.280464 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" containerName="init" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.280471 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" containerName="init" Feb 23 10:25:01 crc kubenswrapper[4904]: E0223 10:25:01.280484 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ac5c77-0a8d-4041-b030-6053e617dbfe" containerName="init" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.280489 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ac5c77-0a8d-4041-b030-6053e617dbfe" containerName="init" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.284940 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ac5c77-0a8d-4041-b030-6053e617dbfe" containerName="dnsmasq-dns" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.284999 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" containerName="watcher-db-sync" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.285017 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6fa3b9b-3fd0-43c1-8f5c-715245ffeadd" containerName="dnsmasq-dns" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.285034 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" containerName="keystone-db-sync" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.286030 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.307115 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.308467 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.309352 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c598d" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.311620 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.334147 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.357816 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m47lx"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.414105 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrjkl\" (UniqueName: \"kubernetes.io/projected/6cf70849-f56a-47a0-a26d-a3840f9d314b-kube-api-access-vrjkl\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.414211 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-combined-ca-bundle\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.414290 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-scripts\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.414424 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-fernet-keys\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.414509 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-config-data\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.414664 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-credential-keys\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.438480 4904 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-847c4cc679-bmswz"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.462607 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.491850 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-bmswz"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.517495 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-credential-keys\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.517579 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrjkl\" (UniqueName: \"kubernetes.io/projected/6cf70849-f56a-47a0-a26d-a3840f9d314b-kube-api-access-vrjkl\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.517612 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-combined-ca-bundle\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.517644 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-scripts\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.517733 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-fernet-keys\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.517778 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-config-data\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.531594 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.533136 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.549994 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-config-data\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.551585 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.556616 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.552126 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-fernet-keys\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.563244 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-credential-keys\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.578422 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.578657 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-djcln" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.578976 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.597268 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.598532 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-scripts\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.599397 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrjkl\" (UniqueName: \"kubernetes.io/projected/6cf70849-f56a-47a0-a26d-a3840f9d314b-kube-api-access-vrjkl\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.600516 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-combined-ca-bundle\") pod \"keystone-bootstrap-m47lx\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.626092 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-swift-storage-0\") pod 
\"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.626179 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7sdz\" (UniqueName: \"kubernetes.io/projected/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-kube-api-access-q7sdz\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.626223 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.626252 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-config\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.626284 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-svc\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.626300 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.658154 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.659613 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.677119 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.677849 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.693372 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-544c54c55c-96gjr"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.694889 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-544c54c55c-96gjr" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.696523 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.696976 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.697184 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.698043 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-mpdx9" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.722073 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.733927 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-config-data\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.733991 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-config\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734025 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a9517bd-8744-42d5-b058-6376f9294bfc-logs\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734050 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-svc\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734067 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734087 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0" Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734112 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9knzj\" (UniqueName: \"kubernetes.io/projected/4a9517bd-8744-42d5-b058-6376f9294bfc-kube-api-access-9knzj\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0" Feb 23 
10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734146 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734179 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734205 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlblm\" (UniqueName: \"kubernetes.io/projected/db916860-b894-4062-a47e-9ca1b6cd8651-kube-api-access-tlblm\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734243 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db916860-b894-4062-a47e-9ca1b6cd8651-logs\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734257 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-config-data\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734277 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7sdz\" (UniqueName: \"kubernetes.io/projected/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-kube-api-access-q7sdz\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734306 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.734326 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.735335 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.738014 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-config\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.738087 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-svc\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.738651 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.738688 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.738739 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-wg2jr"]
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.740063 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.750150 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.750338 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.750446 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5xqx5"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.761603 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m47lx"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.795206 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7sdz\" (UniqueName: \"kubernetes.io/projected/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-kube-api-access-q7sdz\") pod \"dnsmasq-dns-847c4cc679-bmswz\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.803976 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-544c54c55c-96gjr"]
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.819031 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-bmswz"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.833227 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wg2jr"]
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.836449 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.836505 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.836576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlblm\" (UniqueName: \"kubernetes.io/projected/db916860-b894-4062-a47e-9ca1b6cd8651-kube-api-access-tlblm\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.836606 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db916860-b894-4062-a47e-9ca1b6cd8651-logs\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.847566 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-config-data\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.847708 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.847795 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqhcp\" (UniqueName: \"kubernetes.io/projected/539ac286-6fae-4923-b100-f1cd8946c2c2-kube-api-access-hqhcp\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.847857 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-config-data\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.847927 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w57fj\" (UniqueName: \"kubernetes.io/projected/1af33f06-f5a1-4478-9632-15433df32786-kube-api-access-w57fj\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.847988 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a9517bd-8744-42d5-b058-6376f9294bfc-logs\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848008 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1af33f06-f5a1-4478-9632-15433df32786-horizon-secret-key\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848071 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848099 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848126 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-config-data\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848147 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-scripts\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848165 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1af33f06-f5a1-4478-9632-15433df32786-logs\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9knzj\" (UniqueName: \"kubernetes.io/projected/4a9517bd-8744-42d5-b058-6376f9294bfc-kube-api-access-9knzj\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848220 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/539ac286-6fae-4923-b100-f1cd8946c2c2-logs\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.848253 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.852656 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db916860-b894-4062-a47e-9ca1b6cd8651-logs\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.866884 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a9517bd-8744-42d5-b058-6376f9294bfc-logs\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.867418 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.871748 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.872705 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-config-data\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.877726 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-config-data\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.892522 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.915475 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8zkpf"]
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.917835 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.927090 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9knzj\" (UniqueName: \"kubernetes.io/projected/4a9517bd-8744-42d5-b058-6376f9294bfc-kube-api-access-9knzj\") pod \"watcher-applier-0\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " pod="openstack/watcher-applier-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.932793 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-n7lsn"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.934705 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlblm\" (UniqueName: \"kubernetes.io/projected/db916860-b894-4062-a47e-9ca1b6cd8651-kube-api-access-tlblm\") pod \"watcher-api-0\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " pod="openstack/watcher-api-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.935098 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960351 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqhcp\" (UniqueName: \"kubernetes.io/projected/539ac286-6fae-4923-b100-f1cd8946c2c2-kube-api-access-hqhcp\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960434 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w57fj\" (UniqueName: \"kubernetes.io/projected/1af33f06-f5a1-4478-9632-15433df32786-kube-api-access-w57fj\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960465 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960497 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1af33f06-f5a1-4478-9632-15433df32786-horizon-secret-key\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960535 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960558 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-config-data\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960580 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-scripts\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960598 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1af33f06-f5a1-4478-9632-15433df32786-logs\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960618 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/539ac286-6fae-4923-b100-f1cd8946c2c2-logs\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960643 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960672 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960755 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-combined-ca-bundle\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.960782 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhq89\" (UniqueName: \"kubernetes.io/projected/216f1de0-6e22-49b7-88aa-256d0d67b014-kube-api-access-hhq89\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.961834 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-scripts\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.972379 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.987434 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1af33f06-f5a1-4478-9632-15433df32786-horizon-secret-key\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.987674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/539ac286-6fae-4923-b100-f1cd8946c2c2-logs\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.988010 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1af33f06-f5a1-4478-9632-15433df32786-logs\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.989040 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-config-data\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.993338 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.993744 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:01 crc kubenswrapper[4904]: I0223 10:25:01.996833 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w57fj\" (UniqueName: \"kubernetes.io/projected/1af33f06-f5a1-4478-9632-15433df32786-kube-api-access-w57fj\") pod \"horizon-544c54c55c-96gjr\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.018935 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.019050 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.019517 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8zkpf"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.022287 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqhcp\" (UniqueName: \"kubernetes.io/projected/539ac286-6fae-4923-b100-f1cd8946c2c2-kube-api-access-hqhcp\") pod \"watcher-decision-engine-0\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064216 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-combined-ca-bundle\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064303 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhq89\" (UniqueName: \"kubernetes.io/projected/216f1de0-6e22-49b7-88aa-256d0d67b014-kube-api-access-hhq89\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064344 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-combined-ca-bundle\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064411 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-config-data\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064452 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064491 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-scripts\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064515 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25955027-6da1-4cce-8074-f079cf65f840-etc-machine-id\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064539 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-db-sync-config-data\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.064597 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqch9\" (UniqueName: \"kubernetes.io/projected/25955027-6da1-4cce-8074-f079cf65f840-kube-api-access-lqch9\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.077488 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.079437 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.086557 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-combined-ca-bundle\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.131815 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.134577 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.138623 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.140062 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhq89\" (UniqueName: \"kubernetes.io/projected/216f1de0-6e22-49b7-88aa-256d0d67b014-kube-api-access-hhq89\") pod \"neutron-db-sync-wg2jr\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.147245 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.169668 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-combined-ca-bundle\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.169760 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-config-data\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.169804 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-scripts\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.169825 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25955027-6da1-4cce-8074-f079cf65f840-etc-machine-id\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.169845 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-db-sync-config-data\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.169883 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqch9\" (UniqueName: \"kubernetes.io/projected/25955027-6da1-4cce-8074-f079cf65f840-kube-api-access-lqch9\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.175415 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25955027-6da1-4cce-8074-f079cf65f840-etc-machine-id\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.190146 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-config-data\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.190867 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-db-sync-config-data\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.196921 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.200365 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqch9\" (UniqueName: \"kubernetes.io/projected/25955027-6da1-4cce-8074-f079cf65f840-kube-api-access-lqch9\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.206288 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-combined-ca-bundle\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.206340 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.206412 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-scripts\") pod \"cinder-db-sync-8zkpf\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.247051 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-544c54c55c-96gjr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.271917 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mcbg8"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.273496 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.275320 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wg2jr"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.280301 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.294346 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hw6rs"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.299583 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wlw\" (UniqueName: \"kubernetes.io/projected/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-kube-api-access-l6wlw\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.299672 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-run-httpd\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.299760 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-config-data\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.299936 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.299986 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-scripts\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.300014 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.300135 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-log-httpd\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.303335 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.339821 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8zkpf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.349808 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-bmswz"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.394359 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mcbg8"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404184 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6wlw\" (UniqueName: \"kubernetes.io/projected/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-kube-api-access-l6wlw\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404266 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-run-httpd\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404300 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f42ae55-89bb-42a1-900c-9c332e089d96-logs\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404328 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-config-data\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404352 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-combined-ca-bundle\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404419 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-scripts\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404476 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404502 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-scripts\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404520 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-config-data\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404538 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404573 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqq2\" (UniqueName: \"kubernetes.io/projected/0f42ae55-89bb-42a1-900c-9c332e089d96-kube-api-access-jdqq2\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.404619 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-log-httpd\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.410687 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-run-httpd\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.423355 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-log-httpd\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.439631 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.441924 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.442910 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-config-data\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.443560 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-scripts\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.473450 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6wlw\" (UniqueName: \"kubernetes.io/projected/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-kube-api-access-l6wlw\") pod \"ceilometer-0\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.501807 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.507506 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f42ae55-89bb-42a1-900c-9c332e089d96-logs\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.507574 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-combined-ca-bundle\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.507618 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-scripts\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.507668 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-config-data\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.507767 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqq2\" (UniqueName: \"kubernetes.io/projected/0f42ae55-89bb-42a1-900c-9c332e089d96-kube-api-access-jdqq2\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.511178 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f42ae55-89bb-42a1-900c-9c332e089d96-logs\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.511578 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-868bc5fc7c-59nrn"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.514226 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-combined-ca-bundle\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.514305 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-config-data\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.516227 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-scripts\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.561128 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.565058 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-wq2q2"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.572511 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.602827 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdqq2\" (UniqueName: \"kubernetes.io/projected/0f42ae55-89bb-42a1-900c-9c332e089d96-kube-api-access-jdqq2\") pod \"placement-db-sync-mcbg8\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.616410 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-config-data\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.616532 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-scripts\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.616589 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqml\" (UniqueName: \"kubernetes.io/projected/74387706-b6cb-403d-9455-a8cb80907893-kube-api-access-6bqml\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.616652 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74387706-b6cb-403d-9455-a8cb80907893-logs\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.616736 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74387706-b6cb-403d-9455-a8cb80907893-horizon-secret-key\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.646946 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-868bc5fc7c-59nrn"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.687244 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.689337 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.690928 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mcbg8"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.696424 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.700344 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jjmnj"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.700546 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722399 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqml\" (UniqueName: \"kubernetes.io/projected/74387706-b6cb-403d-9455-a8cb80907893-kube-api-access-6bqml\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722500 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5257\" (UniqueName: \"kubernetes.io/projected/d84a0369-8741-4751-9659-66afc1b89c8c-kube-api-access-x5257\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722551 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74387706-b6cb-403d-9455-a8cb80907893-logs\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722578 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722621 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722735 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74387706-b6cb-403d-9455-a8cb80907893-horizon-secret-key\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722768 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-config\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722815 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722893 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-config-data\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722970 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.722993 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-scripts\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.724286 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.725340 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74387706-b6cb-403d-9455-a8cb80907893-logs\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.728412 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-scripts\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.729978 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-wq2q2"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.731864 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74387706-b6cb-403d-9455-a8cb80907893-horizon-secret-key\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.736551 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-config-data\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.757790 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqml\" (UniqueName: \"kubernetes.io/projected/74387706-b6cb-403d-9455-a8cb80907893-kube-api-access-6bqml\") pod \"horizon-868bc5fc7c-59nrn\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " pod="openstack/horizon-868bc5fc7c-59nrn"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.770203 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833140 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833302 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5257\" (UniqueName: \"kubernetes.io/projected/d84a0369-8741-4751-9659-66afc1b89c8c-kube-api-access-x5257\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833340 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833378 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833394 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833419 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-logs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833488 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833513 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-config-data\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833534 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-config\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.833557 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.834534 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.835161 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.835293 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.835334 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-scripts\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.835432 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzzs\" (UniqueName: \"kubernetes.io/projected/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-kube-api-access-llzzs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.835533 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-config\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.836994 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.838013 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.838307 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.839961 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-764vx"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.841868 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-764vx"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.859169 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.859861 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5hmkf"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.871708 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5257\" (UniqueName: \"kubernetes.io/projected/d84a0369-8741-4751-9659-66afc1b89c8c-kube-api-access-x5257\") pod \"dnsmasq-dns-785d8bcb8c-wq2q2\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.874474 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-764vx"]
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.938758 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-combined-ca-bundle\") pod \"barbican-db-sync-764vx\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939297 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cn9k\" (UniqueName: \"kubernetes.io/projected/e89d0344-56a3-4c17-b647-5d69fc060406-kube-api-access-6cn9k\") pod \"barbican-db-sync-764vx\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939395 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0"
Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939454 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-db-sync-config-data\") pod \"barbican-db-sync-764vx\" (UID:
\"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939480 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-logs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939543 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939571 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-config-data\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939627 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939662 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939689 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-scripts\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.939798 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llzzs\" (UniqueName: \"kubernetes.io/projected/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-kube-api-access-llzzs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.942103 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-logs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.943935 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.944344 
4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.968685 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.971747 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.972125 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.972295 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-config-data\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.974694 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.981087 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.981457 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.986731 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.996627 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llzzs\" (UniqueName: \"kubernetes.io/projected/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-kube-api-access-llzzs\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:02 crc kubenswrapper[4904]: I0223 10:25:02.996789 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-scripts\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.004301 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.027911 4904 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m47lx"] Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041587 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041648 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-db-sync-config-data\") pod \"barbican-db-sync-764vx\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041667 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041756 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041786 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041805 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9mx\" (UniqueName: \"kubernetes.io/projected/0276b376-ee0e-45b3-bffc-feedf18e03a8-kube-api-access-rr9mx\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041854 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041900 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041919 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-combined-ca-bundle\") pod \"barbican-db-sync-764vx\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.041939 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cn9k\" (UniqueName: \"kubernetes.io/projected/e89d0344-56a3-4c17-b647-5d69fc060406-kube-api-access-6cn9k\") pod \"barbican-db-sync-764vx\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.055162 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-db-sync-config-data\") pod \"barbican-db-sync-764vx\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.066319 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-combined-ca-bundle\") pod \"barbican-db-sync-764vx\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.088025 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cn9k\" (UniqueName: \"kubernetes.io/projected/e89d0344-56a3-4c17-b647-5d69fc060406-kube-api-access-6cn9k\") pod \"barbican-db-sync-764vx\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.143933 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.144395 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.144493 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.144530 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.144559 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr9mx\" (UniqueName: \"kubernetes.io/projected/0276b376-ee0e-45b3-bffc-feedf18e03a8-kube-api-access-rr9mx\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.144600 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.144634 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.144707 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.146350 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.155159 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-868bc5fc7c-59nrn" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.171188 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.171518 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.191134 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.192978 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.201128 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.204091 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr9mx\" (UniqueName: \"kubernetes.io/projected/0276b376-ee0e-45b3-bffc-feedf18e03a8-kube-api-access-rr9mx\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.221618 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.221849 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.235136 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.241882 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.297950 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.369584 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:03 crc kubenswrapper[4904]: I0223 10:25:03.379333 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-bmswz"] Feb 23 10:25:03 crc kubenswrapper[4904]: W0223 10:25:03.462382 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode54fe1fc_8fd7_4077_99e3_a4249ae46c70.slice/crio-090e6eac9172836be5a3ae813a363826a4bad6fad10399e2e04eb2d1a84d18ff WatchSource:0}: Error finding container 090e6eac9172836be5a3ae813a363826a4bad6fad10399e2e04eb2d1a84d18ff: Status 404 returned error can't find the container with id 090e6eac9172836be5a3ae813a363826a4bad6fad10399e2e04eb2d1a84d18ff Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.022451 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.093488 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-544c54c55c-96gjr"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.103981 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4a9517bd-8744-42d5-b058-6376f9294bfc","Type":"ContainerStarted","Data":"0169deeeedb5845d029d701537cd6d907c911b3cee169fa235fcbd052e959ed8"} Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.113651 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m47lx" event={"ID":"6cf70849-f56a-47a0-a26d-a3840f9d314b","Type":"ContainerStarted","Data":"09302921b0f0aa31e810cb906e305c5c7bbc6134c1875b812be5ce7c2ebbba99"} Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.113727 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m47lx" event={"ID":"6cf70849-f56a-47a0-a26d-a3840f9d314b","Type":"ContainerStarted","Data":"ed1cbebc20a4d6cd2b8f26145f417f042f3afd0cf20a5a80aa2138ddb89e4001"} Feb 23 10:25:04 crc kubenswrapper[4904]: 
I0223 10:25:04.116131 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c54c55c-96gjr" event={"ID":"1af33f06-f5a1-4478-9632-15433df32786","Type":"ContainerStarted","Data":"417955ead4c74036477603ff4e1dd68e79c97a5dcb96c86176e1ba7a7b8fc00f"} Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.118053 4904 generic.go:334] "Generic (PLEG): container finished" podID="e54fe1fc-8fd7-4077-99e3-a4249ae46c70" containerID="7e3993637f5ae05c0270b05b97aba2108fa7ffca26ad55348bd3f98d0c385097" exitCode=0 Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.118095 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-bmswz" event={"ID":"e54fe1fc-8fd7-4077-99e3-a4249ae46c70","Type":"ContainerDied","Data":"7e3993637f5ae05c0270b05b97aba2108fa7ffca26ad55348bd3f98d0c385097"} Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.118130 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-bmswz" event={"ID":"e54fe1fc-8fd7-4077-99e3-a4249ae46c70","Type":"ContainerStarted","Data":"090e6eac9172836be5a3ae813a363826a4bad6fad10399e2e04eb2d1a84d18ff"} Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.119384 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.196996 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m47lx" podStartSLOduration=3.196970886 podStartE2EDuration="3.196970886s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:04.153703526 +0000 UTC m=+1137.574077049" watchObservedRunningTime="2026-02-23 10:25:04.196970886 +0000 UTC m=+1137.617344399" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.339937 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.441049 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8zkpf"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.469807 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mcbg8"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.484423 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.501422 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:04 crc kubenswrapper[4904]: W0223 10:25:04.511762 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f42ae55_89bb_42a1_900c_9c332e089d96.slice/crio-4a1e36736990b7b561ec6f9f6d09bdb12e69cf4aaf922b76475b262162258e08 WatchSource:0}: Error finding container 4a1e36736990b7b561ec6f9f6d09bdb12e69cf4aaf922b76475b262162258e08: Status 404 returned error can't find the container with id 4a1e36736990b7b561ec6f9f6d09bdb12e69cf4aaf922b76475b262162258e08 Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.522448 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-544c54c55c-96gjr"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.554551 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wg2jr"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 
10:25:04.583142 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.600867 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.665302 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-544b9cc98f-nzzsf"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.669744 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.684863 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-544b9cc98f-nzzsf"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.691656 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.859436 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-scripts\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.859922 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51427d17-4627-402f-a41d-987ab62579a5-horizon-secret-key\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.860053 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-config-data\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.860229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51427d17-4627-402f-a41d-987ab62579a5-logs\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.860354 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96t8c\" (UniqueName: \"kubernetes.io/projected/51427d17-4627-402f-a41d-987ab62579a5-kube-api-access-96t8c\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: W0223 10:25:04.894730 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd84a0369_8741_4751_9659_66afc1b89c8c.slice/crio-e6583d21ad5dadeb8f011735eade025ab8f421090a7f4b4a50e38b1ed0ff030e WatchSource:0}: Error finding container e6583d21ad5dadeb8f011735eade025ab8f421090a7f4b4a50e38b1ed0ff030e: Status 404 returned error can't find the container with id e6583d21ad5dadeb8f011735eade025ab8f421090a7f4b4a50e38b1ed0ff030e Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.895075 4904 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/horizon-868bc5fc7c-59nrn"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.908948 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-wq2q2"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.924537 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.974036 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-764vx"] Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.994592 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-svc\") pod \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.994784 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-sb\") pod \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.994859 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-config\") pod \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.995375 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7sdz\" (UniqueName: \"kubernetes.io/projected/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-kube-api-access-q7sdz\") pod \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.995556 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-swift-storage-0\") pod \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.995587 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-nb\") pod \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.996584 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-scripts\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.996684 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51427d17-4627-402f-a41d-987ab62579a5-horizon-secret-key\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.996766 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-config-data\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.997099 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51427d17-4627-402f-a41d-987ab62579a5-logs\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:04 crc kubenswrapper[4904]: I0223 10:25:04.997182 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96t8c\" (UniqueName: \"kubernetes.io/projected/51427d17-4627-402f-a41d-987ab62579a5-kube-api-access-96t8c\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.019586 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51427d17-4627-402f-a41d-987ab62579a5-logs\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.019924 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-scripts\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.020581 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-config-data\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.022516 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-kube-api-access-q7sdz" (OuterVolumeSpecName: "kube-api-access-q7sdz") pod "e54fe1fc-8fd7-4077-99e3-a4249ae46c70" (UID: "e54fe1fc-8fd7-4077-99e3-a4249ae46c70"). InnerVolumeSpecName "kube-api-access-q7sdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.044643 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51427d17-4627-402f-a41d-987ab62579a5-horizon-secret-key\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.058030 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.073925 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96t8c\" (UniqueName: \"kubernetes.io/projected/51427d17-4627-402f-a41d-987ab62579a5-kube-api-access-96t8c\") pod \"horizon-544b9cc98f-nzzsf\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.094999 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-config" (OuterVolumeSpecName: "config") pod "e54fe1fc-8fd7-4077-99e3-a4249ae46c70" (UID: "e54fe1fc-8fd7-4077-99e3-a4249ae46c70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.098642 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e54fe1fc-8fd7-4077-99e3-a4249ae46c70" (UID: "e54fe1fc-8fd7-4077-99e3-a4249ae46c70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.099055 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-svc\") pod \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\" (UID: \"e54fe1fc-8fd7-4077-99e3-a4249ae46c70\") " Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.099670 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.099690 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7sdz\" (UniqueName: \"kubernetes.io/projected/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-kube-api-access-q7sdz\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:05 crc kubenswrapper[4904]: W0223 10:25:05.099808 4904 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e54fe1fc-8fd7-4077-99e3-a4249ae46c70/volumes/kubernetes.io~configmap/dns-svc Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.099824 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e54fe1fc-8fd7-4077-99e3-a4249ae46c70" (UID: "e54fe1fc-8fd7-4077-99e3-a4249ae46c70"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.102503 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e54fe1fc-8fd7-4077-99e3-a4249ae46c70" (UID: "e54fe1fc-8fd7-4077-99e3-a4249ae46c70"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.106758 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e54fe1fc-8fd7-4077-99e3-a4249ae46c70" (UID: "e54fe1fc-8fd7-4077-99e3-a4249ae46c70"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.126988 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e54fe1fc-8fd7-4077-99e3-a4249ae46c70" (UID: "e54fe1fc-8fd7-4077-99e3-a4249ae46c70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.145097 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad","Type":"ContainerStarted","Data":"7b315b7fa289f8ff09ef62d4f329d493d7fd293bfc23529757e5e0663330fed2"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.148522 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8zkpf" event={"ID":"25955027-6da1-4cce-8074-f079cf65f840","Type":"ContainerStarted","Data":"a5151b22ffc669ecbb4d7f0f4d06384f13c121190d126d1be417e5ad0b8ae6d0"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.155810 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.156751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" event={"ID":"d84a0369-8741-4751-9659-66afc1b89c8c","Type":"ContainerStarted","Data":"e6583d21ad5dadeb8f011735eade025ab8f421090a7f4b4a50e38b1ed0ff030e"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.162595 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-bmswz" event={"ID":"e54fe1fc-8fd7-4077-99e3-a4249ae46c70","Type":"ContainerDied","Data":"090e6eac9172836be5a3ae813a363826a4bad6fad10399e2e04eb2d1a84d18ff"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.162690 4904 scope.go:117] "RemoveContainer" containerID="7e3993637f5ae05c0270b05b97aba2108fa7ffca26ad55348bd3f98d0c385097" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.162918 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-bmswz" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.166861 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mcbg8" event={"ID":"0f42ae55-89bb-42a1-900c-9c332e089d96","Type":"ContainerStarted","Data":"4a1e36736990b7b561ec6f9f6d09bdb12e69cf4aaf922b76475b262162258e08"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.183705 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b","Type":"ContainerStarted","Data":"05d34705fd0e5ccbbf544f94e8f251ab6a93a1257b43ba5b579fb5de1df95c9d"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.194656 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wg2jr" event={"ID":"216f1de0-6e22-49b7-88aa-256d0d67b014","Type":"ContainerStarted","Data":"9009c025b11667883e7fd839992a619d8ab864959290a81af83392fedbdcf9a6"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.201790 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.201841 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.201857 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.201870 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e54fe1fc-8fd7-4077-99e3-a4249ae46c70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.215012 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"db916860-b894-4062-a47e-9ca1b6cd8651","Type":"ContainerStarted","Data":"3d5be2e81f07031d14d86e47e11df125a1188c501740c7c9338e44c7bb691c35"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.215102 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"db916860-b894-4062-a47e-9ca1b6cd8651","Type":"ContainerStarted","Data":"16261f56c3097c75a738a7d6d9b5057093ae19513b83bc2bcc7c3021e80865d7"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.219328 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-764vx" event={"ID":"e89d0344-56a3-4c17-b647-5d69fc060406","Type":"ContainerStarted","Data":"7d57c671e74868aeb01cb525b5a7afa8c5a059c867018383e8d242715b07f4b2"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.239596 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-868bc5fc7c-59nrn" event={"ID":"74387706-b6cb-403d-9455-a8cb80907893","Type":"ContainerStarted","Data":"58e15bc8d2fafbfdb6e380ea4ac7257517385a8600ef853ef43580b3ac7c73a3"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.294823 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"539ac286-6fae-4923-b100-f1cd8946c2c2","Type":"ContainerStarted","Data":"99e2e6acfeb411137dc89a202d84c6c185395723173381b5ee0c3bd712ac5f60"} Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.294879 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-bmswz"] Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.294900 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-bmswz"] Feb 23 10:25:05 crc kubenswrapper[4904]: I0223 10:25:05.507343 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:05 crc kubenswrapper[4904]: W0223 10:25:05.665363 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0276b376_ee0e_45b3_bffc_feedf18e03a8.slice/crio-2cde2f71c6e3f6616bfe47df74631a2d807bb6fd41950ae630f080f9f4df94ca WatchSource:0}: Error finding container 2cde2f71c6e3f6616bfe47df74631a2d807bb6fd41950ae630f080f9f4df94ca: Status 404 returned error can't find the container with id 2cde2f71c6e3f6616bfe47df74631a2d807bb6fd41950ae630f080f9f4df94ca Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.020168 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-544b9cc98f-nzzsf"] Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.290083 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"db916860-b894-4062-a47e-9ca1b6cd8651","Type":"ContainerStarted","Data":"e585d15bab894609bf5b3aeff7c49630d0dd96885d14d88f4da51cba3ab2453f"} Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.290334 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api-log" containerID="cri-o://3d5be2e81f07031d14d86e47e11df125a1188c501740c7c9338e44c7bb691c35" gracePeriod=30 Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.291789 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" containerID="cri-o://e585d15bab894609bf5b3aeff7c49630d0dd96885d14d88f4da51cba3ab2453f" gracePeriod=30 Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.291913 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.303748 4904 generic.go:334] "Generic (PLEG): container finished" podID="d84a0369-8741-4751-9659-66afc1b89c8c" containerID="2490e8c8bb2857ee24b2233197ed59db7e2f54018c93e7dc3d25ec756e950fa2" exitCode=0 Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.303856 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" event={"ID":"d84a0369-8741-4751-9659-66afc1b89c8c","Type":"ContainerDied","Data":"2490e8c8bb2857ee24b2233197ed59db7e2f54018c93e7dc3d25ec756e950fa2"} Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.323413 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0276b376-ee0e-45b3-bffc-feedf18e03a8","Type":"ContainerStarted","Data":"2cde2f71c6e3f6616bfe47df74631a2d807bb6fd41950ae630f080f9f4df94ca"} Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.330265 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wg2jr" 
event={"ID":"216f1de0-6e22-49b7-88aa-256d0d67b014","Type":"ContainerStarted","Data":"84aa3f6606ea41a68aa8473b22009f99bbbdee7d57c5d09f818071bb49dfc892"} Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.349893 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.349860092 podStartE2EDuration="5.349860092s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:06.324249315 +0000 UTC m=+1139.744622828" watchObservedRunningTime="2026-02-23 10:25:06.349860092 +0000 UTC m=+1139.770233605" Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.379010 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": EOF" Feb 23 10:25:06 crc kubenswrapper[4904]: I0223 10:25:06.420373 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-wg2jr" podStartSLOduration=5.420342046 podStartE2EDuration="5.420342046s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:06.409049905 +0000 UTC m=+1139.829423428" watchObservedRunningTime="2026-02-23 10:25:06.420342046 +0000 UTC m=+1139.840715569" Feb 23 10:25:07 crc kubenswrapper[4904]: I0223 10:25:07.080798 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 10:25:07 crc kubenswrapper[4904]: I0223 10:25:07.285345 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54fe1fc-8fd7-4077-99e3-a4249ae46c70" path="/var/lib/kubelet/pods/e54fe1fc-8fd7-4077-99e3-a4249ae46c70/volumes" Feb 23 10:25:07 crc kubenswrapper[4904]: I0223 10:25:07.390890 4904 generic.go:334] "Generic (PLEG): container finished" podID="db916860-b894-4062-a47e-9ca1b6cd8651" containerID="3d5be2e81f07031d14d86e47e11df125a1188c501740c7c9338e44c7bb691c35" exitCode=143 Feb 23 10:25:07 crc kubenswrapper[4904]: I0223 10:25:07.391010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"db916860-b894-4062-a47e-9ca1b6cd8651","Type":"ContainerDied","Data":"3d5be2e81f07031d14d86e47e11df125a1188c501740c7c9338e44c7bb691c35"} Feb 23 10:25:07 crc kubenswrapper[4904]: I0223 10:25:07.403400 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b","Type":"ContainerStarted","Data":"a2d3b6ff212114249dca97207510be2ad3a4daf3d4b3c1e80b541d62e0b1fb1a"} Feb 23 10:25:08 crc kubenswrapper[4904]: I0223 10:25:08.424846 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544b9cc98f-nzzsf" event={"ID":"51427d17-4627-402f-a41d-987ab62579a5","Type":"ContainerStarted","Data":"e72b373c3ca08b1716f8c3801777ced4234f9314425d3c4b447939c55c69dba0"} Feb 23 10:25:09 crc kubenswrapper[4904]: I0223 10:25:09.456230 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0276b376-ee0e-45b3-bffc-feedf18e03a8","Type":"ContainerStarted","Data":"88dd4140ff3b91b42bc30462c7fd96dd12da627ba2e4e22a39e49ee879e16aa7"} Feb 23 10:25:09 crc kubenswrapper[4904]: I0223 10:25:09.463165 4904 
generic.go:334] "Generic (PLEG): container finished" podID="6cf70849-f56a-47a0-a26d-a3840f9d314b" containerID="09302921b0f0aa31e810cb906e305c5c7bbc6134c1875b812be5ce7c2ebbba99" exitCode=0 Feb 23 10:25:09 crc kubenswrapper[4904]: I0223 10:25:09.463234 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m47lx" event={"ID":"6cf70849-f56a-47a0-a26d-a3840f9d314b","Type":"ContainerDied","Data":"09302921b0f0aa31e810cb906e305c5c7bbc6134c1875b812be5ce7c2ebbba99"} Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.332319 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-868bc5fc7c-59nrn"] Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.333379 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56599cf886-x6z6x"] Feb 23 10:25:11 crc kubenswrapper[4904]: E0223 10:25:11.334005 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54fe1fc-8fd7-4077-99e3-a4249ae46c70" containerName="init" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.334026 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54fe1fc-8fd7-4077-99e3-a4249ae46c70" containerName="init" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.340954 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54fe1fc-8fd7-4077-99e3-a4249ae46c70" containerName="init" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.350046 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56599cf886-x6z6x"] Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.350236 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.352930 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.412143 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-config-data\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.412372 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-combined-ca-bundle\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.412488 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-scripts\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.412551 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwsbf\" (UniqueName: \"kubernetes.io/projected/16c88a53-6a67-457c-9cce-5fd72203ca30-kube-api-access-dwsbf\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.412589 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-secret-key\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.412614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c88a53-6a67-457c-9cce-5fd72203ca30-logs\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.412649 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-tls-certs\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.440457 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-544b9cc98f-nzzsf"] Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.472562 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cbb478958-6t4v7"] Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.487628 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.493470 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cbb478958-6t4v7"] Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515128 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcfqp\" (UniqueName: \"kubernetes.io/projected/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-kube-api-access-tcfqp\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515182 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-combined-ca-bundle\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515205 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-scripts\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515237 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-config-data\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515351 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-combined-ca-bundle\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515399 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-scripts\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515427 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-horizon-secret-key\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515456 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwsbf\" (UniqueName: \"kubernetes.io/projected/16c88a53-6a67-457c-9cce-5fd72203ca30-kube-api-access-dwsbf\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515485 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-config-data\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515505 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-secret-key\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515526 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c88a53-6a67-457c-9cce-5fd72203ca30-logs\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515551 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-tls-certs\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-logs\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.515647 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-horizon-tls-certs\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.516859 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-scripts\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.517283 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-config-data\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.517281 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c88a53-6a67-457c-9cce-5fd72203ca30-logs\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.534070 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-secret-key\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.538138 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-combined-ca-bundle\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.539034 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-tls-certs\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.541076 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwsbf\" (UniqueName: \"kubernetes.io/projected/16c88a53-6a67-457c-9cce-5fd72203ca30-kube-api-access-dwsbf\") pod \"horizon-56599cf886-x6z6x\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.617807 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-logs\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.617889 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-horizon-tls-certs\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 
10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.617989 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcfqp\" (UniqueName: \"kubernetes.io/projected/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-kube-api-access-tcfqp\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.618023 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-combined-ca-bundle\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.618049 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-scripts\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.618188 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-horizon-secret-key\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.618243 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-config-data\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.619850 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-config-data\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.620289 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-logs\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.621781 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-scripts\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.631512 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-combined-ca-bundle\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.648363 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-horizon-secret-key\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.672950 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-horizon-tls-certs\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.688565 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcfqp\" (UniqueName: \"kubernetes.io/projected/e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b-kube-api-access-tcfqp\") pod \"horizon-7cbb478958-6t4v7\" (UID: \"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b\") " pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.699426 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:11 crc kubenswrapper[4904]: I0223 10:25:11.819192 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:12 crc kubenswrapper[4904]: I0223 10:25:12.122990 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 10:25:12 crc kubenswrapper[4904]: I0223 10:25:12.773175 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": read tcp 10.217.0.2:53960->10.217.0.149:9322: read: connection reset by peer" Feb 23 10:25:13 crc kubenswrapper[4904]: I0223 10:25:13.585247 4904 generic.go:334] "Generic (PLEG): container finished" podID="db916860-b894-4062-a47e-9ca1b6cd8651" containerID="e585d15bab894609bf5b3aeff7c49630d0dd96885d14d88f4da51cba3ab2453f" exitCode=0 Feb 23 10:25:13 crc kubenswrapper[4904]: I0223 10:25:13.585377 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"db916860-b894-4062-a47e-9ca1b6cd8651","Type":"ContainerDied","Data":"e585d15bab894609bf5b3aeff7c49630d0dd96885d14d88f4da51cba3ab2453f"} Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.088790 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m47lx" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.184434 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-scripts\") pod \"6cf70849-f56a-47a0-a26d-a3840f9d314b\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.184491 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-credential-keys\") pod \"6cf70849-f56a-47a0-a26d-a3840f9d314b\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.184512 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-combined-ca-bundle\") pod \"6cf70849-f56a-47a0-a26d-a3840f9d314b\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.184535 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-config-data\") pod \"6cf70849-f56a-47a0-a26d-a3840f9d314b\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.184564 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-fernet-keys\") pod \"6cf70849-f56a-47a0-a26d-a3840f9d314b\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.184613 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrjkl\" (UniqueName: \"kubernetes.io/projected/6cf70849-f56a-47a0-a26d-a3840f9d314b-kube-api-access-vrjkl\") pod \"6cf70849-f56a-47a0-a26d-a3840f9d314b\" (UID: \"6cf70849-f56a-47a0-a26d-a3840f9d314b\") " Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.192934 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6cf70849-f56a-47a0-a26d-a3840f9d314b" (UID: "6cf70849-f56a-47a0-a26d-a3840f9d314b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.193809 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6cf70849-f56a-47a0-a26d-a3840f9d314b" (UID: "6cf70849-f56a-47a0-a26d-a3840f9d314b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.200906 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-scripts" (OuterVolumeSpecName: "scripts") pod "6cf70849-f56a-47a0-a26d-a3840f9d314b" (UID: "6cf70849-f56a-47a0-a26d-a3840f9d314b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.211851 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf70849-f56a-47a0-a26d-a3840f9d314b-kube-api-access-vrjkl" (OuterVolumeSpecName: "kube-api-access-vrjkl") pod "6cf70849-f56a-47a0-a26d-a3840f9d314b" (UID: "6cf70849-f56a-47a0-a26d-a3840f9d314b"). InnerVolumeSpecName "kube-api-access-vrjkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.220959 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cf70849-f56a-47a0-a26d-a3840f9d314b" (UID: "6cf70849-f56a-47a0-a26d-a3840f9d314b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.225061 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-config-data" (OuterVolumeSpecName: "config-data") pod "6cf70849-f56a-47a0-a26d-a3840f9d314b" (UID: "6cf70849-f56a-47a0-a26d-a3840f9d314b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.286858 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.286899 4904 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.286916 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.286932 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.286947 4904 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cf70849-f56a-47a0-a26d-a3840f9d314b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.286959 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrjkl\" (UniqueName: \"kubernetes.io/projected/6cf70849-f56a-47a0-a26d-a3840f9d314b-kube-api-access-vrjkl\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.600441 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m47lx" event={"ID":"6cf70849-f56a-47a0-a26d-a3840f9d314b","Type":"ContainerDied","Data":"ed1cbebc20a4d6cd2b8f26145f417f042f3afd0cf20a5a80aa2138ddb89e4001"} Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.600513 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed1cbebc20a4d6cd2b8f26145f417f042f3afd0cf20a5a80aa2138ddb89e4001" Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 
Feb 23 10:25:14 crc kubenswrapper[4904]: I0223 10:25:14.600624 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m47lx"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.186409 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m47lx"]
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.198064 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m47lx"]
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.274957 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cf70849-f56a-47a0-a26d-a3840f9d314b" path="/var/lib/kubelet/pods/6cf70849-f56a-47a0-a26d-a3840f9d314b/volumes"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.309269 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-72x5h"]
Feb 23 10:25:15 crc kubenswrapper[4904]: E0223 10:25:15.310114 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf70849-f56a-47a0-a26d-a3840f9d314b" containerName="keystone-bootstrap"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.310158 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf70849-f56a-47a0-a26d-a3840f9d314b" containerName="keystone-bootstrap"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.310451 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf70849-f56a-47a0-a26d-a3840f9d314b" containerName="keystone-bootstrap"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.311466 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-72x5h"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.315238 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.315697 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.315849 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.316512 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c598d"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.317014 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.342362 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-72x5h"]
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.414581 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-scripts\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.415102 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-credential-keys\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h"
Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.415184 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-combined-ca-bundle\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.415378 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-config-data\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.415435 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqff\" (UniqueName: \"kubernetes.io/projected/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-kube-api-access-lhqff\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.415468 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-fernet-keys\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.518118 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-combined-ca-bundle\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.518298 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-config-data\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.518338 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhqff\" (UniqueName: \"kubernetes.io/projected/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-kube-api-access-lhqff\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.518363 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-fernet-keys\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.518901 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-scripts\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.519956 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-credential-keys\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.525375 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-scripts\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.525697 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-config-data\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.526132 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-fernet-keys\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.526851 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-combined-ca-bundle\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.531693 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-credential-keys\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.537343 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhqff\" (UniqueName: \"kubernetes.io/projected/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-kube-api-access-lhqff\") pod \"keystone-bootstrap-72x5h\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:15 crc kubenswrapper[4904]: I0223 10:25:15.651284 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:17 crc kubenswrapper[4904]: I0223 10:25:17.398477 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:25:17 crc kubenswrapper[4904]: I0223 10:25:17.398882 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:25:21 crc kubenswrapper[4904]: I0223 10:25:21.705106 4904 generic.go:334] "Generic (PLEG): container finished" podID="216f1de0-6e22-49b7-88aa-256d0d67b014" containerID="84aa3f6606ea41a68aa8473b22009f99bbbdee7d57c5d09f818071bb49dfc892" exitCode=0 Feb 23 10:25:21 crc kubenswrapper[4904]: I0223 10:25:21.705192 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wg2jr" event={"ID":"216f1de0-6e22-49b7-88aa-256d0d67b014","Type":"ContainerDied","Data":"84aa3f6606ea41a68aa8473b22009f99bbbdee7d57c5d09f818071bb49dfc892"} Feb 23 10:25:22 crc kubenswrapper[4904]: I0223 10:25:22.081576 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 10:25:22 crc kubenswrapper[4904]: E0223 10:25:22.285898 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 23 10:25:22 crc kubenswrapper[4904]: E0223 10:25:22.286111 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n697h5chc5hc4h684h695h668hd7h5dchc8h57fh684h644h679h4h98h658h549h646h54chfdh647h5h58hc9h56bh5d8h668h95h59bh588hbfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w57fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-544c54c55c-96gjr_openstack(1af33f06-f5a1-4478-9632-15433df32786): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:25:22 crc kubenswrapper[4904]: E0223 10:25:22.312608 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 23 10:25:22 crc kubenswrapper[4904]: E0223 10:25:22.312883 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n69h78h54ch5f8h98h68chb7h667h59dh6bh678h5cchcch5bhc9h56dh65h58hdh55h9fh648h56chd4h5bbh659h67fh657h566h7chd9h5b6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bqml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-868bc5fc7c-59nrn_openstack(74387706-b6cb-403d-9455-a8cb80907893): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 10:25:22 crc kubenswrapper[4904]: E0223 10:25:22.330888 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-868bc5fc7c-59nrn" podUID="74387706-b6cb-403d-9455-a8cb80907893"
Feb 23 10:25:22 crc kubenswrapper[4904]: E0223 10:25:22.330888 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-544c54c55c-96gjr" podUID="1af33f06-f5a1-4478-9632-15433df32786"
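The two "Error syncing pod, skipping" entries above show the standard escalation: the first pull attempt fails with ErrImagePull (here because the image copy was canceled), and subsequent attempts are throttled as ImagePullBackOff while the kubelet waits between pulls. Kubernetes documents the back-off as increasing with each attempt up to a compiled-in cap of 300 seconds; the sketch below assumes a 10-second base and plain doubling for illustration, since neither value can be read from this log:

    def pull_backoff_schedule(base: float = 10.0, cap: float = 300.0, attempts: int = 8):
        # Doubling back-off with a hard cap, in seconds.
        delays, delay = [], base
        for _ in range(attempts):
            delays.append(min(delay, cap))
            delay *= 2
        return delays

    print(pull_backoff_schedule())
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]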
image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 23 10:25:31 crc kubenswrapper[4904]: E0223 10:25:31.020065 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6cn9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-764vx_openstack(e89d0344-56a3-4c17-b647-5d69fc060406): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:25:31 crc kubenswrapper[4904]: E0223 10:25:31.021172 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-764vx" podUID="e89d0344-56a3-4c17-b647-5d69fc060406" Feb 23 10:25:31 crc kubenswrapper[4904]: E0223 10:25:31.410575 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 23 10:25:31 crc kubenswrapper[4904]: E0223 10:25:31.411270 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h57bh64ch56h5cdh7ch546h87h688h5fh578h68hf9h64dh67fh5bdh649h67h6ch79h5b8h654h9ch547h5c9h5cch657h7ch5c5h54dhcdh564q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6wlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(33f04219-ca0b-4cc3-86c5-67e0ceaf18ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.577985 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.584315 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-544c54c55c-96gjr" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.594377 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-868bc5fc7c-59nrn" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.610358 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-wg2jr" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.664832 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-config-data\") pod \"db916860-b894-4062-a47e-9ca1b6cd8651\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.664927 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w57fj\" (UniqueName: \"kubernetes.io/projected/1af33f06-f5a1-4478-9632-15433df32786-kube-api-access-w57fj\") pod \"1af33f06-f5a1-4478-9632-15433df32786\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.664963 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1af33f06-f5a1-4478-9632-15433df32786-horizon-secret-key\") pod \"1af33f06-f5a1-4478-9632-15433df32786\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665005 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-combined-ca-bundle\") pod \"db916860-b894-4062-a47e-9ca1b6cd8651\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665041 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-custom-prometheus-ca\") pod \"db916860-b894-4062-a47e-9ca1b6cd8651\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665124 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74387706-b6cb-403d-9455-a8cb80907893-logs\") pod \"74387706-b6cb-403d-9455-a8cb80907893\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665180 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhq89\" (UniqueName: \"kubernetes.io/projected/216f1de0-6e22-49b7-88aa-256d0d67b014-kube-api-access-hhq89\") pod \"216f1de0-6e22-49b7-88aa-256d0d67b014\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665468 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-config-data\") pod \"74387706-b6cb-403d-9455-a8cb80907893\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665520 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-combined-ca-bundle\") pod \"216f1de0-6e22-49b7-88aa-256d0d67b014\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665599 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlblm\" (UniqueName: \"kubernetes.io/projected/db916860-b894-4062-a47e-9ca1b6cd8651-kube-api-access-tlblm\") pod 
\"db916860-b894-4062-a47e-9ca1b6cd8651\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665635 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-scripts\") pod \"74387706-b6cb-403d-9455-a8cb80907893\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665664 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-scripts\") pod \"1af33f06-f5a1-4478-9632-15433df32786\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665809 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db916860-b894-4062-a47e-9ca1b6cd8651-logs\") pod \"db916860-b894-4062-a47e-9ca1b6cd8651\" (UID: \"db916860-b894-4062-a47e-9ca1b6cd8651\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665876 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-config-data\") pod \"1af33f06-f5a1-4478-9632-15433df32786\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665923 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74387706-b6cb-403d-9455-a8cb80907893-horizon-secret-key\") pod \"74387706-b6cb-403d-9455-a8cb80907893\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665955 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1af33f06-f5a1-4478-9632-15433df32786-logs\") pod \"1af33f06-f5a1-4478-9632-15433df32786\" (UID: \"1af33f06-f5a1-4478-9632-15433df32786\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.665981 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config\") pod \"216f1de0-6e22-49b7-88aa-256d0d67b014\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.666051 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bqml\" (UniqueName: \"kubernetes.io/projected/74387706-b6cb-403d-9455-a8cb80907893-kube-api-access-6bqml\") pod \"74387706-b6cb-403d-9455-a8cb80907893\" (UID: \"74387706-b6cb-403d-9455-a8cb80907893\") " Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.670906 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74387706-b6cb-403d-9455-a8cb80907893-logs" (OuterVolumeSpecName: "logs") pod "74387706-b6cb-403d-9455-a8cb80907893" (UID: "74387706-b6cb-403d-9455-a8cb80907893"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.671803 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db916860-b894-4062-a47e-9ca1b6cd8651-logs" (OuterVolumeSpecName: "logs") pod "db916860-b894-4062-a47e-9ca1b6cd8651" (UID: "db916860-b894-4062-a47e-9ca1b6cd8651"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.672008 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-config-data" (OuterVolumeSpecName: "config-data") pod "74387706-b6cb-403d-9455-a8cb80907893" (UID: "74387706-b6cb-403d-9455-a8cb80907893"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.675596 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74387706-b6cb-403d-9455-a8cb80907893-kube-api-access-6bqml" (OuterVolumeSpecName: "kube-api-access-6bqml") pod "74387706-b6cb-403d-9455-a8cb80907893" (UID: "74387706-b6cb-403d-9455-a8cb80907893"). InnerVolumeSpecName "kube-api-access-6bqml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.675966 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1af33f06-f5a1-4478-9632-15433df32786-logs" (OuterVolumeSpecName: "logs") pod "1af33f06-f5a1-4478-9632-15433df32786" (UID: "1af33f06-f5a1-4478-9632-15433df32786"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.676430 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216f1de0-6e22-49b7-88aa-256d0d67b014-kube-api-access-hhq89" (OuterVolumeSpecName: "kube-api-access-hhq89") pod "216f1de0-6e22-49b7-88aa-256d0d67b014" (UID: "216f1de0-6e22-49b7-88aa-256d0d67b014"). InnerVolumeSpecName "kube-api-access-hhq89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.676438 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af33f06-f5a1-4478-9632-15433df32786-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1af33f06-f5a1-4478-9632-15433df32786" (UID: "1af33f06-f5a1-4478-9632-15433df32786"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.676925 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-scripts" (OuterVolumeSpecName: "scripts") pod "1af33f06-f5a1-4478-9632-15433df32786" (UID: "1af33f06-f5a1-4478-9632-15433df32786"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.676953 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-scripts" (OuterVolumeSpecName: "scripts") pod "74387706-b6cb-403d-9455-a8cb80907893" (UID: "74387706-b6cb-403d-9455-a8cb80907893"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.677177 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-config-data" (OuterVolumeSpecName: "config-data") pod "1af33f06-f5a1-4478-9632-15433df32786" (UID: "1af33f06-f5a1-4478-9632-15433df32786"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.682777 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74387706-b6cb-403d-9455-a8cb80907893-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74387706-b6cb-403d-9455-a8cb80907893" (UID: "74387706-b6cb-403d-9455-a8cb80907893"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.682791 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db916860-b894-4062-a47e-9ca1b6cd8651-kube-api-access-tlblm" (OuterVolumeSpecName: "kube-api-access-tlblm") pod "db916860-b894-4062-a47e-9ca1b6cd8651" (UID: "db916860-b894-4062-a47e-9ca1b6cd8651"). InnerVolumeSpecName "kube-api-access-tlblm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.700164 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af33f06-f5a1-4478-9632-15433df32786-kube-api-access-w57fj" (OuterVolumeSpecName: "kube-api-access-w57fj") pod "1af33f06-f5a1-4478-9632-15433df32786" (UID: "1af33f06-f5a1-4478-9632-15433df32786"). InnerVolumeSpecName "kube-api-access-w57fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.717137 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db916860-b894-4062-a47e-9ca1b6cd8651" (UID: "db916860-b894-4062-a47e-9ca1b6cd8651"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: E0223 10:25:31.722898 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config podName:216f1de0-6e22-49b7-88aa-256d0d67b014 nodeName:}" failed. No retries permitted until 2026-02-23 10:25:32.222760116 +0000 UTC m=+1165.643133629 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config") pod "216f1de0-6e22-49b7-88aa-256d0d67b014" (UID: "216f1de0-6e22-49b7-88aa-256d0d67b014") : error deleting /var/lib/kubelet/pods/216f1de0-6e22-49b7-88aa-256d0d67b014/volume-subpaths: remove /var/lib/kubelet/pods/216f1de0-6e22-49b7-88aa-256d0d67b014/volume-subpaths: no such file or directory Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.723379 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "db916860-b894-4062-a47e-9ca1b6cd8651" (UID: "db916860-b894-4062-a47e-9ca1b6cd8651"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.743284 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "216f1de0-6e22-49b7-88aa-256d0d67b014" (UID: "216f1de0-6e22-49b7-88aa-256d0d67b014"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770423 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770461 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74387706-b6cb-403d-9455-a8cb80907893-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770474 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1af33f06-f5a1-4478-9632-15433df32786-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770485 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bqml\" (UniqueName: \"kubernetes.io/projected/74387706-b6cb-403d-9455-a8cb80907893-kube-api-access-6bqml\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770494 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w57fj\" (UniqueName: \"kubernetes.io/projected/1af33f06-f5a1-4478-9632-15433df32786-kube-api-access-w57fj\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770507 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1af33f06-f5a1-4478-9632-15433df32786-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770517 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770526 4904 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770534 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74387706-b6cb-403d-9455-a8cb80907893-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770542 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhq89\" (UniqueName: \"kubernetes.io/projected/216f1de0-6e22-49b7-88aa-256d0d67b014-kube-api-access-hhq89\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770552 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770560 4904 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770571 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlblm\" (UniqueName: \"kubernetes.io/projected/db916860-b894-4062-a47e-9ca1b6cd8651-kube-api-access-tlblm\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770580 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74387706-b6cb-403d-9455-a8cb80907893-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770588 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1af33f06-f5a1-4478-9632-15433df32786-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770596 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db916860-b894-4062-a47e-9ca1b6cd8651-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.770796 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-config-data" (OuterVolumeSpecName: "config-data") pod "db916860-b894-4062-a47e-9ca1b6cd8651" (UID: "db916860-b894-4062-a47e-9ca1b6cd8651"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.809640 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wg2jr" event={"ID":"216f1de0-6e22-49b7-88aa-256d0d67b014","Type":"ContainerDied","Data":"9009c025b11667883e7fd839992a619d8ab864959290a81af83392fedbdcf9a6"} Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.809702 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wg2jr" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.809751 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9009c025b11667883e7fd839992a619d8ab864959290a81af83392fedbdcf9a6" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.812268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"db916860-b894-4062-a47e-9ca1b6cd8651","Type":"ContainerDied","Data":"16261f56c3097c75a738a7d6d9b5057093ae19513b83bc2bcc7c3021e80865d7"} Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.812307 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.812331 4904 scope.go:117] "RemoveContainer" containerID="e585d15bab894609bf5b3aeff7c49630d0dd96885d14d88f4da51cba3ab2453f" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.813845 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-544c54c55c-96gjr" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.813884 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c54c55c-96gjr" event={"ID":"1af33f06-f5a1-4478-9632-15433df32786","Type":"ContainerDied","Data":"417955ead4c74036477603ff4e1dd68e79c97a5dcb96c86176e1ba7a7b8fc00f"} Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.815113 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-868bc5fc7c-59nrn" event={"ID":"74387706-b6cb-403d-9455-a8cb80907893","Type":"ContainerDied","Data":"58e15bc8d2fafbfdb6e380ea4ac7257517385a8600ef853ef43580b3ac7c73a3"} Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.815162 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-868bc5fc7c-59nrn" Feb 23 10:25:31 crc kubenswrapper[4904]: E0223 10:25:31.818969 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-764vx" podUID="e89d0344-56a3-4c17-b647-5d69fc060406" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.873173 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db916860-b894-4062-a47e-9ca1b6cd8651-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.899428 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-544c54c55c-96gjr"] Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.925803 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-544c54c55c-96gjr"] Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.965201 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-868bc5fc7c-59nrn"] Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.978000 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-868bc5fc7c-59nrn"] Feb 23 10:25:31 crc kubenswrapper[4904]: I0223 10:25:31.994483 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56599cf886-x6z6x"] Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.004644 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.014647 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.025618 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:32 crc kubenswrapper[4904]: E0223 10:25:32.026259 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.026281 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" Feb 23 10:25:32 crc kubenswrapper[4904]: E0223 10:25:32.026299 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api-log" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.026307 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api-log" 
Feb 23 10:25:32 crc kubenswrapper[4904]: E0223 10:25:32.026326 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216f1de0-6e22-49b7-88aa-256d0d67b014" containerName="neutron-db-sync" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.026334 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="216f1de0-6e22-49b7-88aa-256d0d67b014" containerName="neutron-db-sync" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.026849 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="216f1de0-6e22-49b7-88aa-256d0d67b014" containerName="neutron-db-sync" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.026902 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.026914 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api-log" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.028306 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.031389 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.035553 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.076841 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-config-data\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.076949 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.077251 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jh9k\" (UniqueName: \"kubernetes.io/projected/e67bcf22-c071-482b-8059-aac23cfb59ac-kube-api-access-4jh9k\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.077282 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e67bcf22-c071-482b-8059-aac23cfb59ac-logs\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.077311 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.083202 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" 
podUID="db916860-b894-4062-a47e-9ca1b6cd8651" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.178244 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jh9k\" (UniqueName: \"kubernetes.io/projected/e67bcf22-c071-482b-8059-aac23cfb59ac-kube-api-access-4jh9k\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.178309 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e67bcf22-c071-482b-8059-aac23cfb59ac-logs\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.178339 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.178414 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-config-data\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.178467 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.179909 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e67bcf22-c071-482b-8059-aac23cfb59ac-logs\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.184542 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.184970 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-config-data\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.185111 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.205970 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4jh9k\" (UniqueName: \"kubernetes.io/projected/e67bcf22-c071-482b-8059-aac23cfb59ac-kube-api-access-4jh9k\") pod \"watcher-api-0\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.280269 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config\") pod \"216f1de0-6e22-49b7-88aa-256d0d67b014\" (UID: \"216f1de0-6e22-49b7-88aa-256d0d67b014\") " Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.286106 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config" (OuterVolumeSpecName: "config") pod "216f1de0-6e22-49b7-88aa-256d0d67b014" (UID: "216f1de0-6e22-49b7-88aa-256d0d67b014"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.354668 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.382349 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/216f1de0-6e22-49b7-88aa-256d0d67b014-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.950443 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-wq2q2"] Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.989331 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gp5rq"] Feb 23 10:25:32 crc kubenswrapper[4904]: I0223 10:25:32.991565 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.000308 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.000415 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.000474 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-config\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.002956 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.003040 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmdjq\" (UniqueName: \"kubernetes.io/projected/45fe4c1b-753c-434c-a67a-0022f6109980-kube-api-access-cmdjq\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.003529 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.084746 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gp5rq"] Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.129681 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.130263 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.130355 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.130439 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-config\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.130469 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.130500 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmdjq\" (UniqueName: \"kubernetes.io/projected/45fe4c1b-753c-434c-a67a-0022f6109980-kube-api-access-cmdjq\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.132342 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.144495 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.161061 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-config\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.164342 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.168300 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.170001 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmdjq\" (UniqueName: 
\"kubernetes.io/projected/45fe4c1b-753c-434c-a67a-0022f6109980-kube-api-access-cmdjq\") pod \"dnsmasq-dns-55f844cf75-gp5rq\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: E0223 10:25:33.180165 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 23 10:25:33 crc kubenswrapper[4904]: E0223 10:25:33.180554 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqch9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8zkpf_openstack(25955027-6da1-4cce-8074-f079cf65f840): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 10:25:33 crc kubenswrapper[4904]: E0223 10:25:33.182063 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8zkpf" podUID="25955027-6da1-4cce-8074-f079cf65f840" Feb 23 10:25:33 
crc kubenswrapper[4904]: I0223 10:25:33.196104 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8656f88b4-fzcg6"] Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.199414 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.203209 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5xqx5" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.203381 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.205939 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.206239 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.207840 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8656f88b4-fzcg6"] Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.216172 4904 scope.go:117] "RemoveContainer" containerID="3d5be2e81f07031d14d86e47e11df125a1188c501740c7c9338e44c7bb691c35" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.233133 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-ovndb-tls-certs\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.233243 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-config\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.233263 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-combined-ca-bundle\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.233298 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbll\" (UniqueName: \"kubernetes.io/projected/e3b57aa5-707b-41f7-af28-34a33cb8e84e-kube-api-access-4dbll\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.233316 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-httpd-config\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.292667 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af33f06-f5a1-4478-9632-15433df32786" path="/var/lib/kubelet/pods/1af33f06-f5a1-4478-9632-15433df32786/volumes" 
Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.293162 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74387706-b6cb-403d-9455-a8cb80907893" path="/var/lib/kubelet/pods/74387706-b6cb-403d-9455-a8cb80907893/volumes" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.293586 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db916860-b894-4062-a47e-9ca1b6cd8651" path="/var/lib/kubelet/pods/db916860-b894-4062-a47e-9ca1b6cd8651/volumes" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.326647 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.336887 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-config\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.340865 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-combined-ca-bundle\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.340990 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dbll\" (UniqueName: \"kubernetes.io/projected/e3b57aa5-707b-41f7-af28-34a33cb8e84e-kube-api-access-4dbll\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.341018 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-httpd-config\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.341312 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-ovndb-tls-certs\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.349395 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-httpd-config\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.351804 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-config\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.352298 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-ovndb-tls-certs\") pod 
\"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.353086 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-combined-ca-bundle\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.384923 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dbll\" (UniqueName: \"kubernetes.io/projected/e3b57aa5-707b-41f7-af28-34a33cb8e84e-kube-api-access-4dbll\") pod \"neutron-8656f88b4-fzcg6\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.588690 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.889492 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cbb478958-6t4v7"] Feb 23 10:25:33 crc kubenswrapper[4904]: W0223 10:25:33.928427 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3ed344c_65ef_4fcf_bf9a_e3e703c7e12b.slice/crio-0f2fc7ff46d42d357d90ff03672ea5cb089a738b8563477d8d26a5cea109a5eb WatchSource:0}: Error finding container 0f2fc7ff46d42d357d90ff03672ea5cb089a738b8563477d8d26a5cea109a5eb: Status 404 returned error can't find the container with id 0f2fc7ff46d42d357d90ff03672ea5cb089a738b8563477d8d26a5cea109a5eb Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.933882 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" event={"ID":"d84a0369-8741-4751-9659-66afc1b89c8c","Type":"ContainerStarted","Data":"848f5c59ad0490142ae94454135c5bff2164c359df722661e6c33347b0ef01ab"} Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.934135 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" podUID="d84a0369-8741-4751-9659-66afc1b89c8c" containerName="dnsmasq-dns" containerID="cri-o://848f5c59ad0490142ae94454135c5bff2164c359df722661e6c33347b0ef01ab" gracePeriod=10 Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.934614 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.939164 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56599cf886-x6z6x" event={"ID":"16c88a53-6a67-457c-9cce-5fd72203ca30","Type":"ContainerStarted","Data":"3861de26a4358bf4781849ee5ca536b4cd82a4e7fdab8927f9dd12117bacde93"} Feb 23 10:25:33 crc kubenswrapper[4904]: E0223 10:25:33.958367 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-8zkpf" podUID="25955027-6da1-4cce-8074-f079cf65f840" Feb 23 10:25:33 crc kubenswrapper[4904]: I0223 10:25:33.993606 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" podStartSLOduration=32.993566174 
podStartE2EDuration="32.993566174s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:33.964834177 +0000 UTC m=+1167.385207690" watchObservedRunningTime="2026-02-23 10:25:33.993566174 +0000 UTC m=+1167.413939707" Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.100244 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-72x5h"] Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.279935 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:34 crc kubenswrapper[4904]: W0223 10:25:34.321908 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode67bcf22_c071_482b_8059_aac23cfb59ac.slice/crio-b8b7a6591aaed34c536dd2713a0661b07125fff71db642ce13ef7a5a90a6e6d7 WatchSource:0}: Error finding container b8b7a6591aaed34c536dd2713a0661b07125fff71db642ce13ef7a5a90a6e6d7: Status 404 returned error can't find the container with id b8b7a6591aaed34c536dd2713a0661b07125fff71db642ce13ef7a5a90a6e6d7 Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.469341 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gp5rq"] Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.756268 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8656f88b4-fzcg6"] Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.953228 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbb478958-6t4v7" event={"ID":"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b","Type":"ContainerStarted","Data":"8c91758e1744156a24b987e17af4d9b2547a5b5794788d7f0e76ea69c2fa2ba7"} Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.953284 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbb478958-6t4v7" event={"ID":"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b","Type":"ContainerStarted","Data":"0f2fc7ff46d42d357d90ff03672ea5cb089a738b8563477d8d26a5cea109a5eb"} Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.963374 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56599cf886-x6z6x" event={"ID":"16c88a53-6a67-457c-9cce-5fd72203ca30","Type":"ContainerStarted","Data":"33d0abadfeed8f191cd4e6b8bd96fedb35d9389fe8aa1a5686f8cd4c2edf0bf6"} Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.970569 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0276b376-ee0e-45b3-bffc-feedf18e03a8","Type":"ContainerStarted","Data":"5ae941c1d1a6078505f7e94aadd27f9bd12abf5be8af9f18d25c0bd08d63491e"} Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.970975 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerName="glance-log" containerID="cri-o://88dd4140ff3b91b42bc30462c7fd96dd12da627ba2e4e22a39e49ee879e16aa7" gracePeriod=30 Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.971257 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerName="glance-httpd" containerID="cri-o://5ae941c1d1a6078505f7e94aadd27f9bd12abf5be8af9f18d25c0bd08d63491e" gracePeriod=30 Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 
10:25:34.974674 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"539ac286-6fae-4923-b100-f1cd8946c2c2","Type":"ContainerStarted","Data":"b1dfb5af29c04d61bf96636418bb1f8f5ecb431f5c4ae0d27af0d90dc125d3a7"} Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.989944 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b","Type":"ContainerStarted","Data":"fae8eb94bcdbdf0672457341f0cd879b1fba9466624cfbe9adab25d29045e5e4"} Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.990086 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerName="glance-httpd" containerID="cri-o://fae8eb94bcdbdf0672457341f0cd879b1fba9466624cfbe9adab25d29045e5e4" gracePeriod=30 Feb 23 10:25:34 crc kubenswrapper[4904]: I0223 10:25:34.990097 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerName="glance-log" containerID="cri-o://a2d3b6ff212114249dca97207510be2ad3a4daf3d4b3c1e80b541d62e0b1fb1a" gracePeriod=30 Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.004151 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-72x5h" event={"ID":"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a","Type":"ContainerStarted","Data":"3a122915ab33ebcbc090875a5df8f26098170409b94067d6da2036eeda73aded"} Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.005251 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=34.005226417 podStartE2EDuration="34.005226417s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:34.995598603 +0000 UTC m=+1168.415972106" watchObservedRunningTime="2026-02-23 10:25:35.005226417 +0000 UTC m=+1168.425599930" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.021297 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544b9cc98f-nzzsf" event={"ID":"51427d17-4627-402f-a41d-987ab62579a5","Type":"ContainerStarted","Data":"3e0417c569d90b16c1eda9e930e34f06958062ae8a7a52167e41b9dceeebaef2"} Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.038690 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=34.038660407 podStartE2EDuration="34.038660407s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:35.030179976 +0000 UTC m=+1168.450553489" watchObservedRunningTime="2026-02-23 10:25:35.038660407 +0000 UTC m=+1168.459033910" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.041011 4904 generic.go:334] "Generic (PLEG): container finished" podID="d84a0369-8741-4751-9659-66afc1b89c8c" containerID="848f5c59ad0490142ae94454135c5bff2164c359df722661e6c33347b0ef01ab" exitCode=0 Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.041111 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" 
event={"ID":"d84a0369-8741-4751-9659-66afc1b89c8c","Type":"ContainerDied","Data":"848f5c59ad0490142ae94454135c5bff2164c359df722661e6c33347b0ef01ab"} Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.043682 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e67bcf22-c071-482b-8059-aac23cfb59ac","Type":"ContainerStarted","Data":"b8b7a6591aaed34c536dd2713a0661b07125fff71db642ce13ef7a5a90a6e6d7"} Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.053409 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mcbg8" event={"ID":"0f42ae55-89bb-42a1-900c-9c332e089d96","Type":"ContainerStarted","Data":"9f495fb4de8b047af9ab23d411d271d8fe2175266c7fb7b185559aa0d219b577"} Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.057537 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4a9517bd-8744-42d5-b058-6376f9294bfc","Type":"ContainerStarted","Data":"67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393"} Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.082957 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mcbg8" podStartSLOduration=7.210297188 podStartE2EDuration="34.082932875s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="2026-02-23 10:25:04.569703979 +0000 UTC m=+1137.990077492" lastFinishedPulling="2026-02-23 10:25:31.442339666 +0000 UTC m=+1164.862713179" observedRunningTime="2026-02-23 10:25:35.074079614 +0000 UTC m=+1168.494453127" watchObservedRunningTime="2026-02-23 10:25:35.082932875 +0000 UTC m=+1168.503306388" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.083285 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=7.28426287 podStartE2EDuration="34.083279795s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="2026-02-23 10:25:04.600586847 +0000 UTC m=+1138.020960360" lastFinishedPulling="2026-02-23 10:25:31.399603752 +0000 UTC m=+1164.819977285" observedRunningTime="2026-02-23 10:25:35.051546603 +0000 UTC m=+1168.471920146" watchObservedRunningTime="2026-02-23 10:25:35.083279795 +0000 UTC m=+1168.503653308" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.107445 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=15.875326548 podStartE2EDuration="34.107420711s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="2026-02-23 10:25:04.052610613 +0000 UTC m=+1137.472984126" lastFinishedPulling="2026-02-23 10:25:22.284704766 +0000 UTC m=+1155.705078289" observedRunningTime="2026-02-23 10:25:35.096077189 +0000 UTC m=+1168.516450712" watchObservedRunningTime="2026-02-23 10:25:35.107420711 +0000 UTC m=+1168.527794224" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.479433 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d97dd4c5c-7dc7d"] Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.481558 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.484947 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.485113 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.529081 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d97dd4c5c-7dc7d"] Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.552573 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-httpd-config\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.553254 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z5fs\" (UniqueName: \"kubernetes.io/projected/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-kube-api-access-9z5fs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.553292 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-ovndb-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.553322 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-public-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.553348 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-config\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.553407 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-combined-ca-bundle\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.553461 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-internal-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.656220 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-combined-ca-bundle\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.656305 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-internal-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.656356 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-httpd-config\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.656427 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z5fs\" (UniqueName: \"kubernetes.io/projected/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-kube-api-access-9z5fs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.656452 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-ovndb-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.656473 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-public-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.656495 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-config\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.682135 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-httpd-config\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.686683 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-combined-ca-bundle\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.688294 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-internal-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " 
pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.688504 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-ovndb-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.704994 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-public-tls-certs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.725954 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z5fs\" (UniqueName: \"kubernetes.io/projected/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-kube-api-access-9z5fs\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.727318 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-config\") pod \"neutron-d97dd4c5c-7dc7d\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: E0223 10:25:35.785907 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0276b376_ee0e_45b3_bffc_feedf18e03a8.slice/crio-conmon-5ae941c1d1a6078505f7e94aadd27f9bd12abf5be8af9f18d25c0bd08d63491e.scope\": RecentStats: unable to find data in memory cache]" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.845144 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.865982 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.996006 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5257\" (UniqueName: \"kubernetes.io/projected/d84a0369-8741-4751-9659-66afc1b89c8c-kube-api-access-x5257\") pod \"d84a0369-8741-4751-9659-66afc1b89c8c\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.996577 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-swift-storage-0\") pod \"d84a0369-8741-4751-9659-66afc1b89c8c\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.996682 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-sb\") pod \"d84a0369-8741-4751-9659-66afc1b89c8c\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.996744 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-config\") pod \"d84a0369-8741-4751-9659-66afc1b89c8c\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.996837 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-svc\") pod \"d84a0369-8741-4751-9659-66afc1b89c8c\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " Feb 23 10:25:35 crc kubenswrapper[4904]: I0223 10:25:35.997009 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-nb\") pod \"d84a0369-8741-4751-9659-66afc1b89c8c\" (UID: \"d84a0369-8741-4751-9659-66afc1b89c8c\") " Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.020924 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84a0369-8741-4751-9659-66afc1b89c8c-kube-api-access-x5257" (OuterVolumeSpecName: "kube-api-access-x5257") pod "d84a0369-8741-4751-9659-66afc1b89c8c" (UID: "d84a0369-8741-4751-9659-66afc1b89c8c"). InnerVolumeSpecName "kube-api-access-x5257". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.106612 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5257\" (UniqueName: \"kubernetes.io/projected/d84a0369-8741-4751-9659-66afc1b89c8c-kube-api-access-x5257\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.114423 4904 generic.go:334] "Generic (PLEG): container finished" podID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerID="5ae941c1d1a6078505f7e94aadd27f9bd12abf5be8af9f18d25c0bd08d63491e" exitCode=0 Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.116086 4904 generic.go:334] "Generic (PLEG): container finished" podID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerID="88dd4140ff3b91b42bc30462c7fd96dd12da627ba2e4e22a39e49ee879e16aa7" exitCode=143 Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.116040 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0276b376-ee0e-45b3-bffc-feedf18e03a8","Type":"ContainerDied","Data":"5ae941c1d1a6078505f7e94aadd27f9bd12abf5be8af9f18d25c0bd08d63491e"} Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.116294 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0276b376-ee0e-45b3-bffc-feedf18e03a8","Type":"ContainerDied","Data":"88dd4140ff3b91b42bc30462c7fd96dd12da627ba2e4e22a39e49ee879e16aa7"} Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.127244 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656f88b4-fzcg6" event={"ID":"e3b57aa5-707b-41f7-af28-34a33cb8e84e","Type":"ContainerStarted","Data":"79326774c33c324a998aab923fc9814a15814e24ac968614b4fa889a42e2f192"} Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.130018 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" event={"ID":"45fe4c1b-753c-434c-a67a-0022f6109980","Type":"ContainerStarted","Data":"28ed4a64f0c89e026faa2b614f18a975b88a29750ae5bdce3c556ea61aa43c27"} Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.141405 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" event={"ID":"d84a0369-8741-4751-9659-66afc1b89c8c","Type":"ContainerDied","Data":"e6583d21ad5dadeb8f011735eade025ab8f421090a7f4b4a50e38b1ed0ff030e"} Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.141486 4904 scope.go:117] "RemoveContainer" containerID="848f5c59ad0490142ae94454135c5bff2164c359df722661e6c33347b0ef01ab" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.141672 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-wq2q2" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.150416 4904 generic.go:334] "Generic (PLEG): container finished" podID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerID="fae8eb94bcdbdf0672457341f0cd879b1fba9466624cfbe9adab25d29045e5e4" exitCode=143 Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.150662 4904 generic.go:334] "Generic (PLEG): container finished" podID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerID="a2d3b6ff212114249dca97207510be2ad3a4daf3d4b3c1e80b541d62e0b1fb1a" exitCode=143 Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.150487 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b","Type":"ContainerDied","Data":"fae8eb94bcdbdf0672457341f0cd879b1fba9466624cfbe9adab25d29045e5e4"} Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.150866 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b","Type":"ContainerDied","Data":"a2d3b6ff212114249dca97207510be2ad3a4daf3d4b3c1e80b541d62e0b1fb1a"} Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.199137 4904 scope.go:117] "RemoveContainer" containerID="2490e8c8bb2857ee24b2233197ed59db7e2f54018c93e7dc3d25ec756e950fa2" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.720527 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-config" (OuterVolumeSpecName: "config") pod "d84a0369-8741-4751-9659-66afc1b89c8c" (UID: "d84a0369-8741-4751-9659-66afc1b89c8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.720556 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d84a0369-8741-4751-9659-66afc1b89c8c" (UID: "d84a0369-8741-4751-9659-66afc1b89c8c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.720564 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d84a0369-8741-4751-9659-66afc1b89c8c" (UID: "d84a0369-8741-4751-9659-66afc1b89c8c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.731704 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.731812 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.731828 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.745251 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d84a0369-8741-4751-9659-66afc1b89c8c" (UID: "d84a0369-8741-4751-9659-66afc1b89c8c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.745625 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d84a0369-8741-4751-9659-66afc1b89c8c" (UID: "d84a0369-8741-4751-9659-66afc1b89c8c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.834379 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.834423 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84a0369-8741-4751-9659-66afc1b89c8c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.911534 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d97dd4c5c-7dc7d"] Feb 23 10:25:36 crc kubenswrapper[4904]: I0223 10:25:36.944146 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.019847 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.134069 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.140503 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-combined-ca-bundle\") pod \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.140696 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-httpd-run\") pod \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.140780 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-scripts\") pod \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.140850 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.140876 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llzzs\" (UniqueName: \"kubernetes.io/projected/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-kube-api-access-llzzs\") pod \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.140958 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-logs\") pod \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.141006 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-config-data\") pod \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.141061 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-public-tls-certs\") pod \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\" (UID: \"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.141311 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" (UID: "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.141808 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.142526 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-logs" (OuterVolumeSpecName: "logs") pod "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" (UID: "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.150050 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-scripts" (OuterVolumeSpecName: "scripts") pod "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" (UID: "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.154806 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-kube-api-access-llzzs" (OuterVolumeSpecName: "kube-api-access-llzzs") pod "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" (UID: "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b"). InnerVolumeSpecName "kube-api-access-llzzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.155068 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" (UID: "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.210323 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e67bcf22-c071-482b-8059-aac23cfb59ac","Type":"ContainerStarted","Data":"c8a2455e46abea722142116ec238224772885fc7373307b9c1791dc4d863d6c4"} Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.214496 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbb478958-6t4v7" event={"ID":"e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b","Type":"ContainerStarted","Data":"f87ceda0e84d7468f8cf4fde0b636899f5c33bbf5836b68b78e86b4e4dd38cbf"} Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.217510 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-72x5h" event={"ID":"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a","Type":"ContainerStarted","Data":"6ebc72530255bb9ec0fe9b9ad8987895b784ddbc5daf3ec34f18062d67b01d6a"} Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.222301 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544b9cc98f-nzzsf" event={"ID":"51427d17-4627-402f-a41d-987ab62579a5","Type":"ContainerStarted","Data":"257f56e6f25ff62cb3502c2a87d3bda8e7b75c1e0b265bffb9cc147310ab1b0f"} Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.222583 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-544b9cc98f-nzzsf" podUID="51427d17-4627-402f-a41d-987ab62579a5" containerName="horizon-log" containerID="cri-o://3e0417c569d90b16c1eda9e930e34f06958062ae8a7a52167e41b9dceeebaef2" gracePeriod=30 Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.222782 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-544b9cc98f-nzzsf" podUID="51427d17-4627-402f-a41d-987ab62579a5" containerName="horizon" containerID="cri-o://257f56e6f25ff62cb3502c2a87d3bda8e7b75c1e0b265bffb9cc147310ab1b0f" gracePeriod=30 Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.265334 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0276b376-ee0e-45b3-bffc-feedf18e03a8\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.265533 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-logs\") pod \"0276b376-ee0e-45b3-bffc-feedf18e03a8\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.265574 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr9mx\" (UniqueName: \"kubernetes.io/projected/0276b376-ee0e-45b3-bffc-feedf18e03a8-kube-api-access-rr9mx\") pod \"0276b376-ee0e-45b3-bffc-feedf18e03a8\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.265655 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-config-data\") pod \"0276b376-ee0e-45b3-bffc-feedf18e03a8\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.265749 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-httpd-run\") pod \"0276b376-ee0e-45b3-bffc-feedf18e03a8\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.265776 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-combined-ca-bundle\") pod \"0276b376-ee0e-45b3-bffc-feedf18e03a8\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.265847 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-internal-tls-certs\") pod \"0276b376-ee0e-45b3-bffc-feedf18e03a8\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.266021 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-scripts\") pod \"0276b376-ee0e-45b3-bffc-feedf18e03a8\" (UID: \"0276b376-ee0e-45b3-bffc-feedf18e03a8\") " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.268559 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0276b376-ee0e-45b3-bffc-feedf18e03a8" (UID: "0276b376-ee0e-45b3-bffc-feedf18e03a8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.268827 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-logs" (OuterVolumeSpecName: "logs") pod "0276b376-ee0e-45b3-bffc-feedf18e03a8" (UID: "0276b376-ee0e-45b3-bffc-feedf18e03a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.269314 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.269366 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.270471 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llzzs\" (UniqueName: \"kubernetes.io/projected/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-kube-api-access-llzzs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.271463 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.277089 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "0276b376-ee0e-45b3-bffc-feedf18e03a8" (UID: "0276b376-ee0e-45b3-bffc-feedf18e03a8"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.279651 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0276b376-ee0e-45b3-bffc-feedf18e03a8-kube-api-access-rr9mx" (OuterVolumeSpecName: "kube-api-access-rr9mx") pod "0276b376-ee0e-45b3-bffc-feedf18e03a8" (UID: "0276b376-ee0e-45b3-bffc-feedf18e03a8"). InnerVolumeSpecName "kube-api-access-rr9mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.293164 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.317334 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-scripts" (OuterVolumeSpecName: "scripts") pod "0276b376-ee0e-45b3-bffc-feedf18e03a8" (UID: "0276b376-ee0e-45b3-bffc-feedf18e03a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.326734 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cbb478958-6t4v7" podStartSLOduration=26.326691104 podStartE2EDuration="26.326691104s" podCreationTimestamp="2026-02-23 10:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:37.243365306 +0000 UTC m=+1170.663738829" watchObservedRunningTime="2026-02-23 10:25:37.326691104 +0000 UTC m=+1170.747064617" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.339946 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.343761 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-72x5h" podStartSLOduration=22.343738239 podStartE2EDuration="22.343738239s" podCreationTimestamp="2026-02-23 10:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:37.263277932 +0000 UTC m=+1170.683651445" watchObservedRunningTime="2026-02-23 10:25:37.343738239 +0000 UTC m=+1170.764111752" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.347438 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0276b376-ee0e-45b3-bffc-feedf18e03a8","Type":"ContainerDied","Data":"2cde2f71c6e3f6616bfe47df74631a2d807bb6fd41950ae630f080f9f4df94ca"} Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.347654 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56599cf886-x6z6x" event={"ID":"16c88a53-6a67-457c-9cce-5fd72203ca30","Type":"ContainerStarted","Data":"e9caa45b5573448893f7c5e50da11c9e9c9b9b0d37fab7d7d8fa006cfc467548"} Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.347695 4904 scope.go:117] "RemoveContainer" containerID="5ae941c1d1a6078505f7e94aadd27f9bd12abf5be8af9f18d25c0bd08d63491e" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.347895 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-wq2q2"] Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.348200 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d97dd4c5c-7dc7d" event={"ID":"cb0deee4-46e1-4d7d-aba4-7b9483525f6f","Type":"ContainerStarted","Data":"4454e4b550d97809d3f3a2670b89dc881f1702389d358ca00b011a940f7aa767"} Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.348275 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-wq2q2"] Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.348346 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b","Type":"ContainerDied","Data":"05d34705fd0e5ccbbf544f94e8f251ab6a93a1257b43ba5b579fb5de1df95c9d"} Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.370829 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-544b9cc98f-nzzsf" podStartSLOduration=7.984657379 podStartE2EDuration="33.370785928s" podCreationTimestamp="2026-02-23 10:25:04 +0000 UTC" firstStartedPulling="2026-02-23 10:25:07.747915447 +0000 UTC m=+1141.168288960" lastFinishedPulling="2026-02-23 10:25:33.134043996 +0000 UTC m=+1166.554417509" observedRunningTime="2026-02-23 10:25:37.288828808 +0000 UTC m=+1170.709202331" watchObservedRunningTime="2026-02-23 10:25:37.370785928 +0000 UTC m=+1170.791159441" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.374489 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.374549 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 23 10:25:37 crc 
kubenswrapper[4904]: I0223 10:25:37.374562 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.374572 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr9mx\" (UniqueName: \"kubernetes.io/projected/0276b376-ee0e-45b3-bffc-feedf18e03a8-kube-api-access-rr9mx\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.374584 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0276b376-ee0e-45b3-bffc-feedf18e03a8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.399188 4904 scope.go:117] "RemoveContainer" containerID="88dd4140ff3b91b42bc30462c7fd96dd12da627ba2e4e22a39e49ee879e16aa7" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.427047 4904 scope.go:117] "RemoveContainer" containerID="fae8eb94bcdbdf0672457341f0cd879b1fba9466624cfbe9adab25d29045e5e4" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.459231 4904 scope.go:117] "RemoveContainer" containerID="a2d3b6ff212114249dca97207510be2ad3a4daf3d4b3c1e80b541d62e0b1fb1a" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.522300 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56599cf886-x6z6x" podStartSLOduration=26.522271583 podStartE2EDuration="26.522271583s" podCreationTimestamp="2026-02-23 10:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:37.506477684 +0000 UTC m=+1170.926851217" watchObservedRunningTime="2026-02-23 10:25:37.522271583 +0000 UTC m=+1170.942645096" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.574003 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.577537 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.582178 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.582230 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.635703 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" (UID: "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.686008 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.742914 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0276b376-ee0e-45b3-bffc-feedf18e03a8" (UID: "0276b376-ee0e-45b3-bffc-feedf18e03a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.762691 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" (UID: "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.777859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-config-data" (OuterVolumeSpecName: "config-data") pod "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" (UID: "578f6f8e-6b44-4767-a6b0-6b10f0c3e94b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.788798 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.788861 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.788879 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.796072 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0276b376-ee0e-45b3-bffc-feedf18e03a8" (UID: "0276b376-ee0e-45b3-bffc-feedf18e03a8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.803116 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-config-data" (OuterVolumeSpecName: "config-data") pod "0276b376-ee0e-45b3-bffc-feedf18e03a8" (UID: "0276b376-ee0e-45b3-bffc-feedf18e03a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.891321 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:37 crc kubenswrapper[4904]: I0223 10:25:37.891370 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0276b376-ee0e-45b3-bffc-feedf18e03a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.105416 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.142537 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.178516 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.201534 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.201602 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:38 crc kubenswrapper[4904]: E0223 10:25:38.202168 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerName="glance-log" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202193 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerName="glance-log" Feb 23 10:25:38 crc kubenswrapper[4904]: E0223 10:25:38.202214 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerName="glance-log" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202223 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerName="glance-log" Feb 23 10:25:38 crc kubenswrapper[4904]: E0223 10:25:38.202241 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerName="glance-httpd" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202250 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerName="glance-httpd" Feb 23 10:25:38 crc kubenswrapper[4904]: E0223 10:25:38.202259 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84a0369-8741-4751-9659-66afc1b89c8c" containerName="dnsmasq-dns" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202266 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84a0369-8741-4751-9659-66afc1b89c8c" containerName="dnsmasq-dns" Feb 23 10:25:38 crc kubenswrapper[4904]: E0223 10:25:38.202281 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerName="glance-httpd" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202288 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerName="glance-httpd" Feb 23 10:25:38 crc kubenswrapper[4904]: E0223 10:25:38.202297 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84a0369-8741-4751-9659-66afc1b89c8c" containerName="init" Feb 23 
10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202303 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84a0369-8741-4751-9659-66afc1b89c8c" containerName="init" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202561 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerName="glance-httpd" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202577 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" containerName="glance-log" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202587 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerName="glance-log" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202611 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84a0369-8741-4751-9659-66afc1b89c8c" containerName="dnsmasq-dns" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.202622 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" containerName="glance-httpd" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.204524 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.211094 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.211210 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jjmnj" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.211266 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.214620 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.222823 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.224999 4904 util.go:30] "No sandbox for pod can be found. 
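
The cpu_manager/memory_manager burst above runs as the kubelet admits the replacement glance pods: before handing out resources it drops per-container accounting for pod UIDs that no longer exist, which is why every container of the deleted pods (both glance pods and the old dnsmasq pod) shows "RemoveStaleState: removing container" followed by "Deleted CPUSet assignment". A toy version of that pruning, with made-up cpuset values and no claim to match the kubelet's real state types:

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops assignments whose pod UID is no longer active,
// mimicking the RemoveStaleState lines in the log.
func removeStaleState(state map[key]string, active map[string]bool) {
	for k := range state { // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container pod=%s name=%s\n",
				k.podUID, k.container)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"578f6f8e-6b44-4767-a6b0-6b10f0c3e94b", "glance-httpd"}: "cpus 0-1", // illustrative value
		{"d84a0369-8741-4751-9659-66afc1b89c8c", "dnsmasq-dns"}:  "cpus 2",   // illustrative value
	}
	active := map[string]bool{"0819d4e1-7204-41b9-80c0-1b8e86fb211d": true} // the new internal glance pod
	removeStaleState(state, active)
	fmt.Println("assignments left:", len(state)) // 0
}
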
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.229412 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.229770 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.246027 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.306081 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.311849 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh475\" (UniqueName: \"kubernetes.io/projected/0819d4e1-7204-41b9-80c0-1b8e86fb211d-kube-api-access-hh475\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.311957 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-logs\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.311996 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.312029 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.312050 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.312084 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.312129 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.312203 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.365905 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad","Type":"ContainerStarted","Data":"5622d2cefca58b2c6dab75a11d7b5ccea91386d592f09ff7a19a6c956bd5071f"} Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.370258 4904 generic.go:334] "Generic (PLEG): container finished" podID="45fe4c1b-753c-434c-a67a-0022f6109980" containerID="dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8" exitCode=0 Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.370437 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" event={"ID":"45fe4c1b-753c-434c-a67a-0022f6109980","Type":"ContainerDied","Data":"dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8"} Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.392861 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e67bcf22-c071-482b-8059-aac23cfb59ac","Type":"ContainerStarted","Data":"e99b9faf10cfea2857f02f829288ea81d85fd13e41c543953c28e6dd8ee2511b"} Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.394567 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.398279 4904 generic.go:334] "Generic (PLEG): container finished" podID="0f42ae55-89bb-42a1-900c-9c332e089d96" containerID="9f495fb4de8b047af9ab23d411d271d8fe2175266c7fb7b185559aa0d219b577" exitCode=0 Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.398373 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mcbg8" event={"ID":"0f42ae55-89bb-42a1-900c-9c332e089d96","Type":"ContainerDied","Data":"9f495fb4de8b047af9ab23d411d271d8fe2175266c7fb7b185559aa0d219b577"} Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.407694 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d97dd4c5c-7dc7d" event={"ID":"cb0deee4-46e1-4d7d-aba4-7b9483525f6f","Type":"ContainerStarted","Data":"0dc465a92ee7151385a1192b02a565c1e08537f972d1c098d94d54a45856ccb3"} Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.407897 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413606 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413674 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " 
pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413729 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vkc\" (UniqueName: \"kubernetes.io/projected/370d441a-a231-46cd-b528-1a80d8c593bc-kube-api-access-l4vkc\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413820 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh475\" (UniqueName: \"kubernetes.io/projected/0819d4e1-7204-41b9-80c0-1b8e86fb211d-kube-api-access-hh475\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413839 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413859 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413880 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413902 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413942 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-logs\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413958 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413976 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.413991 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.414022 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.414042 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.414091 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-logs\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.414127 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.415323 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.416382 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-logs\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.423082 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.437549 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.442579 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.443033 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656f88b4-fzcg6" event={"ID":"e3b57aa5-707b-41f7-af28-34a33cb8e84e","Type":"ContainerStarted","Data":"c21d2acd45dd7a1af73b2aa905dfb559e04cd7f63220ef45099a657fadfc5fb8"} Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.454187 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh475\" (UniqueName: \"kubernetes.io/projected/0819d4e1-7204-41b9-80c0-1b8e86fb211d-kube-api-access-hh475\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.479558 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.493557 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.498186 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.519309 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-logs\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.519645 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.519778 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.519870 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4vkc\" (UniqueName: 
\"kubernetes.io/projected/370d441a-a231-46cd-b528-1a80d8c593bc-kube-api-access-l4vkc\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.522166 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.522305 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.522412 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.522506 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.527505 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-logs\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.531306 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.532652 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.537941 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.542350 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.542815 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.546798 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.546991 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=7.546956956 podStartE2EDuration="7.546956956s" podCreationTimestamp="2026-02-23 10:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:38.492461267 +0000 UTC m=+1171.912834810" watchObservedRunningTime="2026-02-23 10:25:38.546956956 +0000 UTC m=+1171.967330479" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.557293 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4vkc\" (UniqueName: \"kubernetes.io/projected/370d441a-a231-46cd-b528-1a80d8c593bc-kube-api-access-l4vkc\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.560626 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d97dd4c5c-7dc7d" podStartSLOduration=3.560591943 podStartE2EDuration="3.560591943s" podCreationTimestamp="2026-02-23 10:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:38.530119947 +0000 UTC m=+1171.950493460" watchObservedRunningTime="2026-02-23 10:25:38.560591943 +0000 UTC m=+1171.980965456" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.565452 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.617031 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " pod="openstack/glance-default-external-api-0" Feb 23 10:25:38 crc kubenswrapper[4904]: I0223 10:25:38.876227 4904 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.123301 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.280252 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0276b376-ee0e-45b3-bffc-feedf18e03a8" path="/var/lib/kubelet/pods/0276b376-ee0e-45b3-bffc-feedf18e03a8/volumes" Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.281697 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578f6f8e-6b44-4767-a6b0-6b10f0c3e94b" path="/var/lib/kubelet/pods/578f6f8e-6b44-4767-a6b0-6b10f0c3e94b/volumes" Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.284422 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84a0369-8741-4751-9659-66afc1b89c8c" path="/var/lib/kubelet/pods/d84a0369-8741-4751-9659-66afc1b89c8c/volumes" Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.481387 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" event={"ID":"45fe4c1b-753c-434c-a67a-0022f6109980","Type":"ContainerStarted","Data":"833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298"} Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.483620 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.502559 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0819d4e1-7204-41b9-80c0-1b8e86fb211d","Type":"ContainerStarted","Data":"e702e90d8e5192a565dff24a08b78ad06058bda0c46781f08145b2c126932806"} Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.531794 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d97dd4c5c-7dc7d" event={"ID":"cb0deee4-46e1-4d7d-aba4-7b9483525f6f","Type":"ContainerStarted","Data":"4e0ac65d4f3bceff878cc73d871f6abb1589f7eef8361fed9c31b3c0e597c5e3"} Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.537571 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" podStartSLOduration=7.537547 podStartE2EDuration="7.537547s" podCreationTimestamp="2026-02-23 10:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:39.526154166 +0000 UTC m=+1172.946527679" watchObservedRunningTime="2026-02-23 10:25:39.537547 +0000 UTC m=+1172.957920513" Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.539666 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656f88b4-fzcg6" event={"ID":"e3b57aa5-707b-41f7-af28-34a33cb8e84e","Type":"ContainerStarted","Data":"6661758c00cc5a88a75673faa655d47686b45d837e898dc85caf3fb0bce6dd4a"} Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.546789 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.584647 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8656f88b4-fzcg6" podStartSLOduration=6.584617588 podStartE2EDuration="6.584617588s" podCreationTimestamp="2026-02-23 10:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-23 10:25:39.584129754 +0000 UTC m=+1173.004503277" watchObservedRunningTime="2026-02-23 10:25:39.584617588 +0000 UTC m=+1173.004991101" Feb 23 10:25:39 crc kubenswrapper[4904]: I0223 10:25:39.721641 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:25:39 crc kubenswrapper[4904]: W0223 10:25:39.799014 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370d441a_a231_46cd_b528_1a80d8c593bc.slice/crio-7ec7a6087ea6691d7c709a4f5ee9b21f38b0156fea07f3420e86f544e92affed WatchSource:0}: Error finding container 7ec7a6087ea6691d7c709a4f5ee9b21f38b0156fea07f3420e86f544e92affed: Status 404 returned error can't find the container with id 7ec7a6087ea6691d7c709a4f5ee9b21f38b0156fea07f3420e86f544e92affed Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.038586 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mcbg8" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.163344 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-combined-ca-bundle\") pod \"0f42ae55-89bb-42a1-900c-9c332e089d96\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.163609 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqq2\" (UniqueName: \"kubernetes.io/projected/0f42ae55-89bb-42a1-900c-9c332e089d96-kube-api-access-jdqq2\") pod \"0f42ae55-89bb-42a1-900c-9c332e089d96\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.163764 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f42ae55-89bb-42a1-900c-9c332e089d96-logs\") pod \"0f42ae55-89bb-42a1-900c-9c332e089d96\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.163794 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-scripts\") pod \"0f42ae55-89bb-42a1-900c-9c332e089d96\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.163838 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-config-data\") pod \"0f42ae55-89bb-42a1-900c-9c332e089d96\" (UID: \"0f42ae55-89bb-42a1-900c-9c332e089d96\") " Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.164496 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f42ae55-89bb-42a1-900c-9c332e089d96-logs" (OuterVolumeSpecName: "logs") pod "0f42ae55-89bb-42a1-900c-9c332e089d96" (UID: "0f42ae55-89bb-42a1-900c-9c332e089d96"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.164851 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f42ae55-89bb-42a1-900c-9c332e089d96-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.175836 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-scripts" (OuterVolumeSpecName: "scripts") pod "0f42ae55-89bb-42a1-900c-9c332e089d96" (UID: "0f42ae55-89bb-42a1-900c-9c332e089d96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.198595 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f42ae55-89bb-42a1-900c-9c332e089d96-kube-api-access-jdqq2" (OuterVolumeSpecName: "kube-api-access-jdqq2") pod "0f42ae55-89bb-42a1-900c-9c332e089d96" (UID: "0f42ae55-89bb-42a1-900c-9c332e089d96"). InnerVolumeSpecName "kube-api-access-jdqq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.238439 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f42ae55-89bb-42a1-900c-9c332e089d96" (UID: "0f42ae55-89bb-42a1-900c-9c332e089d96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.282802 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqq2\" (UniqueName: \"kubernetes.io/projected/0f42ae55-89bb-42a1-900c-9c332e089d96-kube-api-access-jdqq2\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.282850 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.282864 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.291515 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-config-data" (OuterVolumeSpecName: "config-data") pod "0f42ae55-89bb-42a1-900c-9c332e089d96" (UID: "0f42ae55-89bb-42a1-900c-9c332e089d96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.384861 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f42ae55-89bb-42a1-900c-9c332e089d96-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.578438 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f8c7c9fd4-2449r"] Feb 23 10:25:40 crc kubenswrapper[4904]: E0223 10:25:40.579206 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f42ae55-89bb-42a1-900c-9c332e089d96" containerName="placement-db-sync" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.579221 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f42ae55-89bb-42a1-900c-9c332e089d96" containerName="placement-db-sync" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.579417 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f42ae55-89bb-42a1-900c-9c332e089d96" containerName="placement-db-sync" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.580328 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mcbg8" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.583956 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mcbg8" event={"ID":"0f42ae55-89bb-42a1-900c-9c332e089d96","Type":"ContainerDied","Data":"4a1e36736990b7b561ec6f9f6d09bdb12e69cf4aaf922b76475b262162258e08"} Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.584005 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a1e36736990b7b561ec6f9f6d09bdb12e69cf4aaf922b76475b262162258e08" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.584100 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.587839 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hw6rs" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.588122 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.588582 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"370d441a-a231-46cd-b528-1a80d8c593bc","Type":"ContainerStarted","Data":"7ec7a6087ea6691d7c709a4f5ee9b21f38b0156fea07f3420e86f544e92affed"} Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.589023 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.589154 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.589245 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.605360 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.606790 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0819d4e1-7204-41b9-80c0-1b8e86fb211d","Type":"ContainerStarted","Data":"25dd08c1c8d066cb437b1d6dbd7839292b3a914bd9976f76934fefff8f9410c0"} Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.610787 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f8c7c9fd4-2449r"] Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.695696 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-scripts\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.701863 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-config-data\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.701936 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f8a283-bc3b-4cd4-ab91-244942a44e58-logs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.701980 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9277l\" (UniqueName: \"kubernetes.io/projected/a1f8a283-bc3b-4cd4-ab91-244942a44e58-kube-api-access-9277l\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.709953 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-combined-ca-bundle\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.710168 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-internal-tls-certs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.710332 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-public-tls-certs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.813383 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-config-data\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.813888 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f8a283-bc3b-4cd4-ab91-244942a44e58-logs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.813921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9277l\" (UniqueName: \"kubernetes.io/projected/a1f8a283-bc3b-4cd4-ab91-244942a44e58-kube-api-access-9277l\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.813958 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-combined-ca-bundle\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.813992 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-internal-tls-certs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.814021 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-public-tls-certs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.814129 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-scripts\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.815595 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f8a283-bc3b-4cd4-ab91-244942a44e58-logs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.826453 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-scripts\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.827096 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-internal-tls-certs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.836397 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-public-tls-certs\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.843062 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-combined-ca-bundle\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.848573 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-config-data\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.856317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9277l\" (UniqueName: \"kubernetes.io/projected/a1f8a283-bc3b-4cd4-ab91-244942a44e58-kube-api-access-9277l\") pod \"placement-f8c7c9fd4-2449r\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:40 crc kubenswrapper[4904]: I0223 10:25:40.947598 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:41 crc kubenswrapper[4904]: I0223 10:25:41.298675 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 10:25:41 crc kubenswrapper[4904]: I0223 10:25:41.632765 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"370d441a-a231-46cd-b528-1a80d8c593bc","Type":"ContainerStarted","Data":"200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23"} Feb 23 10:25:41 crc kubenswrapper[4904]: I0223 10:25:41.700142 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:41 crc kubenswrapper[4904]: I0223 10:25:41.701976 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:25:41 crc kubenswrapper[4904]: I0223 10:25:41.708800 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f8c7c9fd4-2449r"] Feb 23 10:25:41 crc kubenswrapper[4904]: I0223 10:25:41.820917 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:41 crc kubenswrapper[4904]: I0223 10:25:41.822298 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.019854 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.090669 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.197802 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.232232 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.355282 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.355360 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.408335 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.691218 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0819d4e1-7204-41b9-80c0-1b8e86fb211d","Type":"ContainerStarted","Data":"7b9cb4add431cbc249a603e2bfa5389a0f4ba87ef27fb2bd3c811455cca5baf1"} Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.702536 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"370d441a-a231-46cd-b528-1a80d8c593bc","Type":"ContainerStarted","Data":"2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a"} Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.706034 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f8c7c9fd4-2449r" 
event={"ID":"a1f8a283-bc3b-4cd4-ab91-244942a44e58","Type":"ContainerStarted","Data":"f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da"} Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.706075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f8c7c9fd4-2449r" event={"ID":"a1f8a283-bc3b-4cd4-ab91-244942a44e58","Type":"ContainerStarted","Data":"1430355315af8a0c13d57587710cdaa31b1d5f3bbec3c38bafed7b38f5776ef8"} Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.716173 4904 generic.go:334] "Generic (PLEG): container finished" podID="6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" containerID="6ebc72530255bb9ec0fe9b9ad8987895b784ddbc5daf3ec34f18062d67b01d6a" exitCode=0 Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.718232 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-72x5h" event={"ID":"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a","Type":"ContainerDied","Data":"6ebc72530255bb9ec0fe9b9ad8987895b784ddbc5daf3ec34f18062d67b01d6a"} Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.719315 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.736763 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.736743714 podStartE2EDuration="4.736743714s" podCreationTimestamp="2026-02-23 10:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:42.735154578 +0000 UTC m=+1176.155528091" watchObservedRunningTime="2026-02-23 10:25:42.736743714 +0000 UTC m=+1176.157117227" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.737611 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.796796 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.79677199 podStartE2EDuration="4.79677199s" podCreationTimestamp="2026-02-23 10:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:42.78026034 +0000 UTC m=+1176.200633863" watchObservedRunningTime="2026-02-23 10:25:42.79677199 +0000 UTC m=+1176.217145503" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.820858 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.823081 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.875799 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:42 crc kubenswrapper[4904]: I0223 10:25:42.970248 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:44 crc kubenswrapper[4904]: I0223 10:25:44.744187 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="539ac286-6fae-4923-b100-f1cd8946c2c2" containerName="watcher-decision-engine" containerID="cri-o://b1dfb5af29c04d61bf96636418bb1f8f5ecb431f5c4ae0d27af0d90dc125d3a7" gracePeriod=30 Feb 23 10:25:44 
crc kubenswrapper[4904]: I0223 10:25:44.744926 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="4a9517bd-8744-42d5-b058-6376f9294bfc" containerName="watcher-applier" containerID="cri-o://67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393" gracePeriod=30 Feb 23 10:25:45 crc kubenswrapper[4904]: I0223 10:25:45.157250 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:25:46 crc kubenswrapper[4904]: I0223 10:25:46.374660 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:46 crc kubenswrapper[4904]: I0223 10:25:46.375439 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api-log" containerID="cri-o://c8a2455e46abea722142116ec238224772885fc7373307b9c1791dc4d863d6c4" gracePeriod=30 Feb 23 10:25:46 crc kubenswrapper[4904]: I0223 10:25:46.376004 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api" containerID="cri-o://e99b9faf10cfea2857f02f829288ea81d85fd13e41c543953c28e6dd8ee2511b" gracePeriod=30 Feb 23 10:25:46 crc kubenswrapper[4904]: I0223 10:25:46.786253 4904 generic.go:334] "Generic (PLEG): container finished" podID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerID="c8a2455e46abea722142116ec238224772885fc7373307b9c1791dc4d863d6c4" exitCode=143 Feb 23 10:25:46 crc kubenswrapper[4904]: I0223 10:25:46.786327 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e67bcf22-c071-482b-8059-aac23cfb59ac","Type":"ContainerDied","Data":"c8a2455e46abea722142116ec238224772885fc7373307b9c1791dc4d863d6c4"} Feb 23 10:25:46 crc kubenswrapper[4904]: I0223 10:25:46.790221 4904 generic.go:334] "Generic (PLEG): container finished" podID="4a9517bd-8744-42d5-b058-6376f9294bfc" containerID="67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393" exitCode=0 Feb 23 10:25:46 crc kubenswrapper[4904]: I0223 10:25:46.790306 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4a9517bd-8744-42d5-b058-6376f9294bfc","Type":"ContainerDied","Data":"67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393"} Feb 23 10:25:47 crc kubenswrapper[4904]: E0223 10:25:47.021165 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393 is running failed: container process not found" containerID="67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 10:25:47 crc kubenswrapper[4904]: E0223 10:25:47.022283 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393 is running failed: container process not found" containerID="67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 10:25:47 crc kubenswrapper[4904]: E0223 10:25:47.022743 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393 is running failed: container process not found" containerID="67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 23 10:25:47 crc kubenswrapper[4904]: E0223 10:25:47.022787 4904 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="4a9517bd-8744-42d5-b058-6376f9294bfc" containerName="watcher-applier" Feb 23 10:25:47 crc kubenswrapper[4904]: I0223 10:25:47.398326 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:25:47 crc kubenswrapper[4904]: I0223 10:25:47.399140 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:25:47 crc kubenswrapper[4904]: I0223 10:25:47.399219 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:25:47 crc kubenswrapper[4904]: I0223 10:25:47.400401 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6264b62be7a8acc04b5529c5a569156f7d3e2773a196aa33b2133b46c2a62f4"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:25:47 crc kubenswrapper[4904]: I0223 10:25:47.400592 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://a6264b62be7a8acc04b5529c5a569156f7d3e2773a196aa33b2133b46c2a62f4" gracePeriod=600 Feb 23 10:25:47 crc kubenswrapper[4904]: I0223 10:25:47.812423 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="a6264b62be7a8acc04b5529c5a569156f7d3e2773a196aa33b2133b46c2a62f4" exitCode=0 Feb 23 10:25:47 crc kubenswrapper[4904]: I0223 10:25:47.812527 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"a6264b62be7a8acc04b5529c5a569156f7d3e2773a196aa33b2133b46c2a62f4"} Feb 23 10:25:47 crc kubenswrapper[4904]: I0223 10:25:47.812603 4904 scope.go:117] "RemoveContainer" containerID="65119bff18e2a117e57ca57b960a2723a6ad4c2ce44063bd803ebeebee5b384d" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.329014 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.409691 4904 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g79dn"] Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.410149 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" podUID="300121d9-ca54-432f-b210-a3bb1df0f8fc" containerName="dnsmasq-dns" containerID="cri-o://88a7988dd78e1bce6e164eff7425ecaed2b86ad3bc6521030442f27032dcaa2b" gracePeriod=10 Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.538324 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.539172 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.593250 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.594206 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.829979 4904 generic.go:334] "Generic (PLEG): container finished" podID="539ac286-6fae-4923-b100-f1cd8946c2c2" containerID="b1dfb5af29c04d61bf96636418bb1f8f5ecb431f5c4ae0d27af0d90dc125d3a7" exitCode=0 Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.830068 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"539ac286-6fae-4923-b100-f1cd8946c2c2","Type":"ContainerDied","Data":"b1dfb5af29c04d61bf96636418bb1f8f5ecb431f5c4ae0d27af0d90dc125d3a7"} Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.835645 4904 generic.go:334] "Generic (PLEG): container finished" podID="300121d9-ca54-432f-b210-a3bb1df0f8fc" containerID="88a7988dd78e1bce6e164eff7425ecaed2b86ad3bc6521030442f27032dcaa2b" exitCode=0 Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.835729 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" event={"ID":"300121d9-ca54-432f-b210-a3bb1df0f8fc","Type":"ContainerDied","Data":"88a7988dd78e1bce6e164eff7425ecaed2b86ad3bc6521030442f27032dcaa2b"} Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.836305 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.836381 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.880398 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.880461 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.924362 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 10:25:48 crc kubenswrapper[4904]: I0223 10:25:48.941194 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.794431 4904 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/watcher-api-0" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": read tcp 10.217.0.2:52820->10.217.0.165:9322: read: connection reset by peer" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.798153 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": read tcp 10.217.0.2:52814->10.217.0.165:9322: read: connection reset by peer" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.812110 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.829894 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-fernet-keys\") pod \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.830128 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-scripts\") pod \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.830191 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-combined-ca-bundle\") pod \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.830296 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-config-data\") pod \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.830403 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhqff\" (UniqueName: \"kubernetes.io/projected/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-kube-api-access-lhqff\") pod \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.830442 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-credential-keys\") pod \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\" (UID: \"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a\") " Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.847130 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" (UID: "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.857045 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-kube-api-access-lhqff" (OuterVolumeSpecName: "kube-api-access-lhqff") pod "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" (UID: "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a"). InnerVolumeSpecName "kube-api-access-lhqff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.861905 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-scripts" (OuterVolumeSpecName: "scripts") pod "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" (UID: "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.868009 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" (UID: "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.869299 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-72x5h" event={"ID":"6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a","Type":"ContainerDied","Data":"3a122915ab33ebcbc090875a5df8f26098170409b94067d6da2036eeda73aded"} Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.869374 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a122915ab33ebcbc090875a5df8f26098170409b94067d6da2036eeda73aded" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.870543 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-72x5h" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.880749 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.881062 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.905899 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" (UID: "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.923852 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-config-data" (OuterVolumeSpecName: "config-data") pod "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" (UID: "6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.948682 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.948756 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.948787 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhqff\" (UniqueName: \"kubernetes.io/projected/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-kube-api-access-lhqff\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.948811 4904 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.948837 4904 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:49 crc kubenswrapper[4904]: I0223 10:25:49.948865 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.525337 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.692471 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-swift-storage-0\") pod \"300121d9-ca54-432f-b210-a3bb1df0f8fc\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.692578 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-config\") pod \"300121d9-ca54-432f-b210-a3bb1df0f8fc\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.697932 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-sb\") pod \"300121d9-ca54-432f-b210-a3bb1df0f8fc\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.698009 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxjp9\" (UniqueName: \"kubernetes.io/projected/300121d9-ca54-432f-b210-a3bb1df0f8fc-kube-api-access-bxjp9\") pod \"300121d9-ca54-432f-b210-a3bb1df0f8fc\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.698147 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-svc\") pod \"300121d9-ca54-432f-b210-a3bb1df0f8fc\" (UID: 
\"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.698360 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-nb\") pod \"300121d9-ca54-432f-b210-a3bb1df0f8fc\" (UID: \"300121d9-ca54-432f-b210-a3bb1df0f8fc\") " Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.739166 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300121d9-ca54-432f-b210-a3bb1df0f8fc-kube-api-access-bxjp9" (OuterVolumeSpecName: "kube-api-access-bxjp9") pod "300121d9-ca54-432f-b210-a3bb1df0f8fc" (UID: "300121d9-ca54-432f-b210-a3bb1df0f8fc"). InnerVolumeSpecName "kube-api-access-bxjp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.801770 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxjp9\" (UniqueName: \"kubernetes.io/projected/300121d9-ca54-432f-b210-a3bb1df0f8fc-kube-api-access-bxjp9\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.925773 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-config" (OuterVolumeSpecName: "config") pod "300121d9-ca54-432f-b210-a3bb1df0f8fc" (UID: "300121d9-ca54-432f-b210-a3bb1df0f8fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.928457 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "300121d9-ca54-432f-b210-a3bb1df0f8fc" (UID: "300121d9-ca54-432f-b210-a3bb1df0f8fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.937568 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" event={"ID":"300121d9-ca54-432f-b210-a3bb1df0f8fc","Type":"ContainerDied","Data":"863618f7ecc955edc1f18b03be69d041ea49e354c8114132c81a8ad9d0bf211e"} Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.937650 4904 scope.go:117] "RemoveContainer" containerID="88a7988dd78e1bce6e164eff7425ecaed2b86ad3bc6521030442f27032dcaa2b" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.937986 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g79dn" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.943185 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "300121d9-ca54-432f-b210-a3bb1df0f8fc" (UID: "300121d9-ca54-432f-b210-a3bb1df0f8fc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.952171 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "300121d9-ca54-432f-b210-a3bb1df0f8fc" (UID: "300121d9-ca54-432f-b210-a3bb1df0f8fc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.958020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f8c7c9fd4-2449r" event={"ID":"a1f8a283-bc3b-4cd4-ab91-244942a44e58","Type":"ContainerStarted","Data":"ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e"} Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.960428 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.961245 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:50 crc kubenswrapper[4904]: I0223 10:25:50.980416 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "300121d9-ca54-432f-b210-a3bb1df0f8fc" (UID: "300121d9-ca54-432f-b210-a3bb1df0f8fc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.020048 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.020087 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.020101 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.020110 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.020121 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300121d9-ca54-432f-b210-a3bb1df0f8fc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.020394 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"239d9f69c1c753f6e98d8261e34261e4ea3e4b4d4d57f0a5fcbe49086812f15b"} Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.062154 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-764vx" event={"ID":"e89d0344-56a3-4c17-b647-5d69fc060406","Type":"ContainerStarted","Data":"07b264153c8230e4f64d0b18984c01b2465c471155f3cc715ab5ecfa5f5c8af9"} Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.046149 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f8c7c9fd4-2449r" podStartSLOduration=11.046123714 podStartE2EDuration="11.046123714s" podCreationTimestamp="2026-02-23 10:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 10:25:51.018007775 +0000 UTC m=+1184.438381288" watchObservedRunningTime="2026-02-23 10:25:51.046123714 +0000 UTC m=+1184.466497227" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.089005 4904 generic.go:334] "Generic (PLEG): container finished" podID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerID="e99b9faf10cfea2857f02f829288ea81d85fd13e41c543953c28e6dd8ee2511b" exitCode=0 Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.089095 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e67bcf22-c071-482b-8059-aac23cfb59ac","Type":"ContainerDied","Data":"e99b9faf10cfea2857f02f829288ea81d85fd13e41c543953c28e6dd8ee2511b"} Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.093108 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4a9517bd-8744-42d5-b058-6376f9294bfc","Type":"ContainerDied","Data":"0169deeeedb5845d029d701537cd6d907c911b3cee169fa235fcbd052e959ed8"} Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.093146 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0169deeeedb5845d029d701537cd6d907c911b3cee169fa235fcbd052e959ed8" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.097808 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-84d4456f94-cxsx9"] Feb 23 10:25:51 crc kubenswrapper[4904]: E0223 10:25:51.098526 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" containerName="keystone-bootstrap" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.098544 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" containerName="keystone-bootstrap" Feb 23 10:25:51 crc kubenswrapper[4904]: E0223 10:25:51.098563 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300121d9-ca54-432f-b210-a3bb1df0f8fc" containerName="init" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.098569 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="300121d9-ca54-432f-b210-a3bb1df0f8fc" containerName="init" Feb 23 10:25:51 crc kubenswrapper[4904]: E0223 10:25:51.098580 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300121d9-ca54-432f-b210-a3bb1df0f8fc" containerName="dnsmasq-dns" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.098587 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="300121d9-ca54-432f-b210-a3bb1df0f8fc" containerName="dnsmasq-dns" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.098870 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" containerName="keystone-bootstrap" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.098895 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="300121d9-ca54-432f-b210-a3bb1df0f8fc" containerName="dnsmasq-dns" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.099952 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.105785 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.106019 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.106259 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.106408 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.111512 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"539ac286-6fae-4923-b100-f1cd8946c2c2","Type":"ContainerDied","Data":"99e2e6acfeb411137dc89a202d84c6c185395723173381b5ee0c3bd712ac5f60"} Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.111561 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e2e6acfeb411137dc89a202d84c6c185395723173381b5ee0c3bd712ac5f60" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.112054 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.112286 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c598d" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.125098 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84d4456f94-cxsx9"] Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.132278 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-764vx" podStartSLOduration=3.929093284 podStartE2EDuration="49.132251312s" podCreationTimestamp="2026-02-23 10:25:02 +0000 UTC" firstStartedPulling="2026-02-23 10:25:04.921763295 +0000 UTC m=+1138.342136808" lastFinishedPulling="2026-02-23 10:25:50.124921323 +0000 UTC m=+1183.545294836" observedRunningTime="2026-02-23 10:25:51.121519917 +0000 UTC m=+1184.541893430" watchObservedRunningTime="2026-02-23 10:25:51.132251312 +0000 UTC m=+1184.552624825" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.138849 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad","Type":"ContainerStarted","Data":"6081ce68167e59f903787e2ef7815cd75e1e31249b818587c491f38a71a3b36e"} Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.152284 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.153265 4904 scope.go:117] "RemoveContainer" containerID="0a13063e194e3f15012fa973b2060a23eab37f45aec9db453ef4bbc8099110e9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.204516 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.225325 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rb8\" (UniqueName: \"kubernetes.io/projected/87a8d0d0-2e01-4089-8a6c-722c46bd362b-kube-api-access-c5rb8\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.225406 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-internal-tls-certs\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.225498 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-public-tls-certs\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.225662 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-fernet-keys\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.225732 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-credential-keys\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.225776 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-combined-ca-bundle\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.225833 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-config-data\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.225861 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-scripts\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.243708 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.329316 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-combined-ca-bundle\") pod \"e67bcf22-c071-482b-8059-aac23cfb59ac\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.329371 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-config-data\") pod \"e67bcf22-c071-482b-8059-aac23cfb59ac\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.329463 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-combined-ca-bundle\") pod \"539ac286-6fae-4923-b100-f1cd8946c2c2\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.329490 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/539ac286-6fae-4923-b100-f1cd8946c2c2-logs\") pod \"539ac286-6fae-4923-b100-f1cd8946c2c2\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.329521 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-config-data\") pod \"4a9517bd-8744-42d5-b058-6376f9294bfc\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.329563 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqhcp\" (UniqueName: \"kubernetes.io/projected/539ac286-6fae-4923-b100-f1cd8946c2c2-kube-api-access-hqhcp\") pod \"539ac286-6fae-4923-b100-f1cd8946c2c2\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.329655 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9knzj\" (UniqueName: \"kubernetes.io/projected/4a9517bd-8744-42d5-b058-6376f9294bfc-kube-api-access-9knzj\") pod \"4a9517bd-8744-42d5-b058-6376f9294bfc\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.331871 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/539ac286-6fae-4923-b100-f1cd8946c2c2-logs" (OuterVolumeSpecName: "logs") pod "539ac286-6fae-4923-b100-f1cd8946c2c2" (UID: "539ac286-6fae-4923-b100-f1cd8946c2c2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.347899 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-custom-prometheus-ca\") pod \"539ac286-6fae-4923-b100-f1cd8946c2c2\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.347992 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a9517bd-8744-42d5-b058-6376f9294bfc-logs\") pod \"4a9517bd-8744-42d5-b058-6376f9294bfc\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.348074 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jh9k\" (UniqueName: \"kubernetes.io/projected/e67bcf22-c071-482b-8059-aac23cfb59ac-kube-api-access-4jh9k\") pod \"e67bcf22-c071-482b-8059-aac23cfb59ac\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.348113 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-custom-prometheus-ca\") pod \"e67bcf22-c071-482b-8059-aac23cfb59ac\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.348163 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-config-data\") pod \"539ac286-6fae-4923-b100-f1cd8946c2c2\" (UID: \"539ac286-6fae-4923-b100-f1cd8946c2c2\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.348184 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e67bcf22-c071-482b-8059-aac23cfb59ac-logs\") pod \"e67bcf22-c071-482b-8059-aac23cfb59ac\" (UID: \"e67bcf22-c071-482b-8059-aac23cfb59ac\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.348212 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-combined-ca-bundle\") pod \"4a9517bd-8744-42d5-b058-6376f9294bfc\" (UID: \"4a9517bd-8744-42d5-b058-6376f9294bfc\") " Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.349573 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-fernet-keys\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.349646 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-credential-keys\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.349723 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-combined-ca-bundle\") pod 
\"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.349809 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-config-data\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.349835 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-scripts\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.349874 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rb8\" (UniqueName: \"kubernetes.io/projected/87a8d0d0-2e01-4089-8a6c-722c46bd362b-kube-api-access-c5rb8\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.349972 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-internal-tls-certs\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.350134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-public-tls-certs\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.350405 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/539ac286-6fae-4923-b100-f1cd8946c2c2-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.351689 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9517bd-8744-42d5-b058-6376f9294bfc-logs" (OuterVolumeSpecName: "logs") pod "4a9517bd-8744-42d5-b058-6376f9294bfc" (UID: "4a9517bd-8744-42d5-b058-6376f9294bfc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.360928 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/539ac286-6fae-4923-b100-f1cd8946c2c2-kube-api-access-hqhcp" (OuterVolumeSpecName: "kube-api-access-hqhcp") pod "539ac286-6fae-4923-b100-f1cd8946c2c2" (UID: "539ac286-6fae-4923-b100-f1cd8946c2c2"). InnerVolumeSpecName "kube-api-access-hqhcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.365272 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67bcf22-c071-482b-8059-aac23cfb59ac-logs" (OuterVolumeSpecName: "logs") pod "e67bcf22-c071-482b-8059-aac23cfb59ac" (UID: "e67bcf22-c071-482b-8059-aac23cfb59ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.376977 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-public-tls-certs\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.390479 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-credential-keys\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.391875 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67bcf22-c071-482b-8059-aac23cfb59ac-kube-api-access-4jh9k" (OuterVolumeSpecName: "kube-api-access-4jh9k") pod "e67bcf22-c071-482b-8059-aac23cfb59ac" (UID: "e67bcf22-c071-482b-8059-aac23cfb59ac"). InnerVolumeSpecName "kube-api-access-4jh9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.392097 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-config-data\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.392564 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9517bd-8744-42d5-b058-6376f9294bfc-kube-api-access-9knzj" (OuterVolumeSpecName: "kube-api-access-9knzj") pod "4a9517bd-8744-42d5-b058-6376f9294bfc" (UID: "4a9517bd-8744-42d5-b058-6376f9294bfc"). InnerVolumeSpecName "kube-api-access-9knzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.393425 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-combined-ca-bundle\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.399126 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-fernet-keys\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.402944 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-scripts\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.416490 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a8d0d0-2e01-4089-8a6c-722c46bd362b-internal-tls-certs\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.444776 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g79dn"] Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.449428 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rb8\" (UniqueName: \"kubernetes.io/projected/87a8d0d0-2e01-4089-8a6c-722c46bd362b-kube-api-access-c5rb8\") pod \"keystone-84d4456f94-cxsx9\" (UID: \"87a8d0d0-2e01-4089-8a6c-722c46bd362b\") " pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.453622 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqhcp\" (UniqueName: \"kubernetes.io/projected/539ac286-6fae-4923-b100-f1cd8946c2c2-kube-api-access-hqhcp\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.453655 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9knzj\" (UniqueName: \"kubernetes.io/projected/4a9517bd-8744-42d5-b058-6376f9294bfc-kube-api-access-9knzj\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.453667 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a9517bd-8744-42d5-b058-6376f9294bfc-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.453680 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jh9k\" (UniqueName: \"kubernetes.io/projected/e67bcf22-c071-482b-8059-aac23cfb59ac-kube-api-access-4jh9k\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.453689 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e67bcf22-c071-482b-8059-aac23cfb59ac-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.491428 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-74f6bcbc87-g79dn"] Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.516903 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "539ac286-6fae-4923-b100-f1cd8946c2c2" (UID: "539ac286-6fae-4923-b100-f1cd8946c2c2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.522271 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e67bcf22-c071-482b-8059-aac23cfb59ac" (UID: "e67bcf22-c071-482b-8059-aac23cfb59ac"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.549843 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.559580 4904 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.559610 4904 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.590548 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-config-data" (OuterVolumeSpecName: "config-data") pod "e67bcf22-c071-482b-8059-aac23cfb59ac" (UID: "e67bcf22-c071-482b-8059-aac23cfb59ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.590859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e67bcf22-c071-482b-8059-aac23cfb59ac" (UID: "e67bcf22-c071-482b-8059-aac23cfb59ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.604013 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-config-data" (OuterVolumeSpecName: "config-data") pod "4a9517bd-8744-42d5-b058-6376f9294bfc" (UID: "4a9517bd-8744-42d5-b058-6376f9294bfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.653787 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "539ac286-6fae-4923-b100-f1cd8946c2c2" (UID: "539ac286-6fae-4923-b100-f1cd8946c2c2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.662550 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.662608 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bcf22-c071-482b-8059-aac23cfb59ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.662622 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.662632 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.712950 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56599cf886-x6z6x" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.718965 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a9517bd-8744-42d5-b058-6376f9294bfc" (UID: "4a9517bd-8744-42d5-b058-6376f9294bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.756663 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-config-data" (OuterVolumeSpecName: "config-data") pod "539ac286-6fae-4923-b100-f1cd8946c2c2" (UID: "539ac286-6fae-4923-b100-f1cd8946c2c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.770582 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539ac286-6fae-4923-b100-f1cd8946c2c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.770664 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9517bd-8744-42d5-b058-6376f9294bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:51 crc kubenswrapper[4904]: I0223 10:25:51.836098 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cbb478958-6t4v7" podUID="e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.200125 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.200780 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e67bcf22-c071-482b-8059-aac23cfb59ac","Type":"ContainerDied","Data":"b8b7a6591aaed34c536dd2713a0661b07125fff71db642ce13ef7a5a90a6e6d7"} Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.200873 4904 scope.go:117] "RemoveContainer" containerID="e99b9faf10cfea2857f02f829288ea81d85fd13e41c543953c28e6dd8ee2511b" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.201097 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.204459 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.315137 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.350660 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.428223 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.439144 4904 scope.go:117] "RemoveContainer" containerID="c8a2455e46abea722142116ec238224772885fc7373307b9c1791dc4d863d6c4" Feb 23 10:25:52 crc kubenswrapper[4904]: E0223 10:25:52.473426 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9517bd-8744-42d5-b058-6376f9294bfc" containerName="watcher-applier" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.473476 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9517bd-8744-42d5-b058-6376f9294bfc" containerName="watcher-applier" Feb 23 10:25:52 crc kubenswrapper[4904]: E0223 10:25:52.473516 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api-log" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.473523 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api-log" Feb 23 10:25:52 crc kubenswrapper[4904]: E0223 10:25:52.473590 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539ac286-6fae-4923-b100-f1cd8946c2c2" containerName="watcher-decision-engine" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.473597 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="539ac286-6fae-4923-b100-f1cd8946c2c2" containerName="watcher-decision-engine" Feb 23 10:25:52 crc kubenswrapper[4904]: E0223 10:25:52.473623 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.473629 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.474535 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="539ac286-6fae-4923-b100-f1cd8946c2c2" containerName="watcher-decision-engine" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.474575 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api-log" Feb 23 10:25:52 crc 
kubenswrapper[4904]: I0223 10:25:52.474589 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" containerName="watcher-api" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.474620 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9517bd-8744-42d5-b058-6376f9294bfc" containerName="watcher-applier" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.476337 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.496902 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.497215 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.497450 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-djcln" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.500048 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.502031 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.502089 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-logs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.502124 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.502306 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzgj\" (UniqueName: \"kubernetes.io/projected/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-kube-api-access-8rzgj\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.502338 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.502360 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-config-data\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 
10:25:52.502417 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-public-tls-certs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.512747 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.551144 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84d4456f94-cxsx9"] Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.615138 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-public-tls-certs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.615277 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.615328 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-logs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.615416 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.615677 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzgj\" (UniqueName: \"kubernetes.io/projected/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-kube-api-access-8rzgj\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.615750 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.615793 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-config-data\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.616851 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-logs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.661682 
4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.661794 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzgj\" (UniqueName: \"kubernetes.io/projected/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-kube-api-access-8rzgj\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.663913 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-config-data\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.670320 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-public-tls-certs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.672321 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.672805 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61\") " pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.977758 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 23 10:25:52 crc kubenswrapper[4904]: I0223 10:25:52.980184 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.015879 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.035079 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.036882 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.043201 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.055145 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.096214 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.112379 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.134180 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.140241 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.146512 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.179939 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.235569 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.237181 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.237397 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53ac71c-9251-491f-8c96-da8a2b408b48-logs\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.237585 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.237749 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ccq\" (UniqueName: \"kubernetes.io/projected/f53ac71c-9251-491f-8c96-da8a2b408b48-kube-api-access-b8ccq\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.247238 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84d4456f94-cxsx9" 
event={"ID":"87a8d0d0-2e01-4089-8a6c-722c46bd362b","Type":"ContainerStarted","Data":"741765b473813c55e28ed59bdaef25f49581acc04cbd1b94b3595db9e13446c0"} Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.255361 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.257358 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8zkpf" event={"ID":"25955027-6da1-4cce-8074-f079cf65f840","Type":"ContainerStarted","Data":"81e26ee0fa0c94165ab650892fdd472a4e230a32ad7e8da177b4f31df9544d28"} Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.318733 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300121d9-ca54-432f-b210-a3bb1df0f8fc" path="/var/lib/kubelet/pods/300121d9-ca54-432f-b210-a3bb1df0f8fc/volumes" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.324359 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9517bd-8744-42d5-b058-6376f9294bfc" path="/var/lib/kubelet/pods/4a9517bd-8744-42d5-b058-6376f9294bfc/volumes" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.325369 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="539ac286-6fae-4923-b100-f1cd8946c2c2" path="/var/lib/kubelet/pods/539ac286-6fae-4923-b100-f1cd8946c2c2/volumes" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.329963 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67bcf22-c071-482b-8059-aac23cfb59ac" path="/var/lib/kubelet/pods/e67bcf22-c071-482b-8059-aac23cfb59ac/volumes" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.339432 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaeb9663-5c3b-4c64-becf-297691ff9f84-logs\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.339507 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb9663-5c3b-4c64-becf-297691ff9f84-config-data\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.339571 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb9663-5c3b-4c64-becf-297691ff9f84-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.339648 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.339707 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc 
kubenswrapper[4904]: I0223 10:25:53.339750 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53ac71c-9251-491f-8c96-da8a2b408b48-logs\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.339793 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.339841 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhj6\" (UniqueName: \"kubernetes.io/projected/eaeb9663-5c3b-4c64-becf-297691ff9f84-kube-api-access-9hhj6\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.339871 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ccq\" (UniqueName: \"kubernetes.io/projected/f53ac71c-9251-491f-8c96-da8a2b408b48-kube-api-access-b8ccq\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.343792 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53ac71c-9251-491f-8c96-da8a2b408b48-logs\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.362409 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ccq\" (UniqueName: \"kubernetes.io/projected/f53ac71c-9251-491f-8c96-da8a2b408b48-kube-api-access-b8ccq\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.363558 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.363576 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.366288 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.384975 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.465694 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hhj6\" (UniqueName: \"kubernetes.io/projected/eaeb9663-5c3b-4c64-becf-297691ff9f84-kube-api-access-9hhj6\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.465840 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaeb9663-5c3b-4c64-becf-297691ff9f84-logs\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.465938 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb9663-5c3b-4c64-becf-297691ff9f84-config-data\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.466115 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb9663-5c3b-4c64-becf-297691ff9f84-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.466553 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaeb9663-5c3b-4c64-becf-297691ff9f84-logs\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.481776 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb9663-5c3b-4c64-becf-297691ff9f84-config-data\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.490549 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb9663-5c3b-4c64-becf-297691ff9f84-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.499371 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hhj6\" (UniqueName: \"kubernetes.io/projected/eaeb9663-5c3b-4c64-becf-297691ff9f84-kube-api-access-9hhj6\") pod \"watcher-applier-0\" (UID: \"eaeb9663-5c3b-4c64-becf-297691ff9f84\") " pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.566760 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.684392 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8zkpf" podStartSLOduration=7.149963274 podStartE2EDuration="52.684362845s" podCreationTimestamp="2026-02-23 10:25:01 +0000 UTC" firstStartedPulling="2026-02-23 10:25:04.593012912 +0000 UTC m=+1138.013386425" lastFinishedPulling="2026-02-23 10:25:50.127412483 
+0000 UTC m=+1183.547785996" observedRunningTime="2026-02-23 10:25:53.293703082 +0000 UTC m=+1186.714076605" watchObservedRunningTime="2026-02-23 10:25:53.684362845 +0000 UTC m=+1187.104736358" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.698783 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 23 10:25:53 crc kubenswrapper[4904]: I0223 10:25:53.830912 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.164598 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:25:54 crc kubenswrapper[4904]: W0223 10:25:54.184112 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf53ac71c_9251_491f_8c96_da8a2b408b48.slice/crio-f6b16e72482b41fdc3c8c23c9715b57d9e0bfcf7aa9e90a5694a3b7f6c78d487 WatchSource:0}: Error finding container f6b16e72482b41fdc3c8c23c9715b57d9e0bfcf7aa9e90a5694a3b7f6c78d487: Status 404 returned error can't find the container with id f6b16e72482b41fdc3c8c23c9715b57d9e0bfcf7aa9e90a5694a3b7f6c78d487 Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.325479 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84d4456f94-cxsx9" event={"ID":"87a8d0d0-2e01-4089-8a6c-722c46bd362b","Type":"ContainerStarted","Data":"9113e01aa493cfae77768e93c4fa75ed267373b4ff26ec94590b97d85fa6d268"} Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.326908 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.342260 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61","Type":"ContainerStarted","Data":"c8cf5ab284ec0b3ee133da20980019df97832e0f689df7c5c5ebd7e68518baa7"} Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.342320 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61","Type":"ContainerStarted","Data":"869fc85436aa2e0212788d3906c63a7b15497ed51a54c050a5469d56a3b482a2"} Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.368507 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f53ac71c-9251-491f-8c96-da8a2b408b48","Type":"ContainerStarted","Data":"f6b16e72482b41fdc3c8c23c9715b57d9e0bfcf7aa9e90a5694a3b7f6c78d487"} Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.369803 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-84d4456f94-cxsx9" podStartSLOduration=3.369777295 podStartE2EDuration="3.369777295s" podCreationTimestamp="2026-02-23 10:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:54.36886829 +0000 UTC m=+1187.789241813" watchObservedRunningTime="2026-02-23 10:25:54.369777295 +0000 UTC m=+1187.790150808" Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.510010 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 23 10:25:54 crc kubenswrapper[4904]: W0223 10:25:54.540442 4904 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaeb9663_5c3b_4c64_becf_297691ff9f84.slice/crio-e15cec73cca9feca4bce400f3f2d48d8166809be6b20047c3da77e5f086bd5d4 WatchSource:0}: Error finding container e15cec73cca9feca4bce400f3f2d48d8166809be6b20047c3da77e5f086bd5d4: Status 404 returned error can't find the container with id e15cec73cca9feca4bce400f3f2d48d8166809be6b20047c3da77e5f086bd5d4 Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.916673 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.916890 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:25:54 crc kubenswrapper[4904]: I0223 10:25:54.918398 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.035963 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.036418 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.098483 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.412349 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"eaeb9663-5c3b-4c64-becf-297691ff9f84","Type":"ContainerStarted","Data":"480b5067bec9f4ce88d76af3d6e0c89ab3daf87cc6caae1244a7d0f282f65cb5"} Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.412404 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"eaeb9663-5c3b-4c64-becf-297691ff9f84","Type":"ContainerStarted","Data":"e15cec73cca9feca4bce400f3f2d48d8166809be6b20047c3da77e5f086bd5d4"} Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.433665 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61","Type":"ContainerStarted","Data":"ec3777271d344988f724744e27c4b28dbe1ad9f6f9e21e215fd577bf249c1498"} Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.433743 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.447942 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f53ac71c-9251-491f-8c96-da8a2b408b48","Type":"ContainerStarted","Data":"95c7cf082496dd9eb45b4f0cc9a60cfbf0718c5f4d719cc1184ff68acaa582d4"} Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.460106 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.460079552 podStartE2EDuration="2.460079552s" podCreationTimestamp="2026-02-23 10:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:55.441926986 +0000 UTC m=+1188.862300499" watchObservedRunningTime="2026-02-23 10:25:55.460079552 +0000 UTC m=+1188.880453055" Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.494452 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" 
podStartSLOduration=3.494418278 podStartE2EDuration="3.494418278s" podCreationTimestamp="2026-02-23 10:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:55.4892161 +0000 UTC m=+1188.909589613" watchObservedRunningTime="2026-02-23 10:25:55.494418278 +0000 UTC m=+1188.914791791" Feb 23 10:25:55 crc kubenswrapper[4904]: I0223 10:25:55.531025 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.530993648 podStartE2EDuration="3.530993648s" podCreationTimestamp="2026-02-23 10:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:25:55.512406259 +0000 UTC m=+1188.932779772" watchObservedRunningTime="2026-02-23 10:25:55.530993648 +0000 UTC m=+1188.951367161" Feb 23 10:25:56 crc kubenswrapper[4904]: I0223 10:25:56.480154 4904 generic.go:334] "Generic (PLEG): container finished" podID="e89d0344-56a3-4c17-b647-5d69fc060406" containerID="07b264153c8230e4f64d0b18984c01b2465c471155f3cc715ab5ecfa5f5c8af9" exitCode=0 Feb 23 10:25:56 crc kubenswrapper[4904]: I0223 10:25:56.480533 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-764vx" event={"ID":"e89d0344-56a3-4c17-b647-5d69fc060406","Type":"ContainerDied","Data":"07b264153c8230e4f64d0b18984c01b2465c471155f3cc715ab5ecfa5f5c8af9"} Feb 23 10:25:57 crc kubenswrapper[4904]: I0223 10:25:57.978736 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 23 10:25:57 crc kubenswrapper[4904]: I0223 10:25:57.979549 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.056538 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.105256 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.137812 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-combined-ca-bundle\") pod \"e89d0344-56a3-4c17-b647-5d69fc060406\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.137883 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-db-sync-config-data\") pod \"e89d0344-56a3-4c17-b647-5d69fc060406\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.138202 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cn9k\" (UniqueName: \"kubernetes.io/projected/e89d0344-56a3-4c17-b647-5d69fc060406-kube-api-access-6cn9k\") pod \"e89d0344-56a3-4c17-b647-5d69fc060406\" (UID: \"e89d0344-56a3-4c17-b647-5d69fc060406\") " Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.156921 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e89d0344-56a3-4c17-b647-5d69fc060406" (UID: "e89d0344-56a3-4c17-b647-5d69fc060406"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.165324 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89d0344-56a3-4c17-b647-5d69fc060406-kube-api-access-6cn9k" (OuterVolumeSpecName: "kube-api-access-6cn9k") pod "e89d0344-56a3-4c17-b647-5d69fc060406" (UID: "e89d0344-56a3-4c17-b647-5d69fc060406"). InnerVolumeSpecName "kube-api-access-6cn9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.194493 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e89d0344-56a3-4c17-b647-5d69fc060406" (UID: "e89d0344-56a3-4c17-b647-5d69fc060406"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.242149 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cn9k\" (UniqueName: \"kubernetes.io/projected/e89d0344-56a3-4c17-b647-5d69fc060406-kube-api-access-6cn9k\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.242191 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.242201 4904 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89d0344-56a3-4c17-b647-5d69fc060406-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.511114 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-764vx" event={"ID":"e89d0344-56a3-4c17-b647-5d69fc060406","Type":"ContainerDied","Data":"7d57c671e74868aeb01cb525b5a7afa8c5a059c867018383e8d242715b07f4b2"} Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.511169 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d57c671e74868aeb01cb525b5a7afa8c5a059c867018383e8d242715b07f4b2" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.511163 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-764vx" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.699423 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.773615 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d8dc7bdcf-447fv"] Feb 23 10:25:58 crc kubenswrapper[4904]: E0223 10:25:58.774877 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89d0344-56a3-4c17-b647-5d69fc060406" containerName="barbican-db-sync" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.774914 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89d0344-56a3-4c17-b647-5d69fc060406" containerName="barbican-db-sync" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.775194 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89d0344-56a3-4c17-b647-5d69fc060406" containerName="barbican-db-sync" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.776678 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.785380 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.785481 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.785557 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5hmkf" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.806330 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d8dc7bdcf-447fv"] Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.838829 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6955ddcccd-p7hv5"] Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.842135 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.850292 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.857228 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-config-data-custom\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.857320 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-combined-ca-bundle\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.857365 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-logs\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.857408 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tkn\" (UniqueName: \"kubernetes.io/projected/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-kube-api-access-45tkn\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.857522 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-config-data\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.910841 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-keystone-listener-6955ddcccd-p7hv5"] Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.958353 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6svhj"] Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.960447 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.972270 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-config-data-custom\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.972496 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21c928c2-98cf-48a2-b04b-e7520b36c73a-logs\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.972603 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-combined-ca-bundle\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.972631 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-config-data\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.972697 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvh8f\" (UniqueName: \"kubernetes.io/projected/21c928c2-98cf-48a2-b04b-e7520b36c73a-kube-api-access-lvh8f\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.972779 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-logs\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.972879 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-config-data-custom\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.972921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45tkn\" (UniqueName: 
\"kubernetes.io/projected/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-kube-api-access-45tkn\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.973123 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-config-data\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.973156 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-combined-ca-bundle\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.974028 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-logs\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.985525 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6svhj"] Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.990450 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-config-data\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:58 crc kubenswrapper[4904]: I0223 10:25:58.990784 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-combined-ca-bundle\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.003286 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-config-data-custom\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.016074 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tkn\" (UniqueName: \"kubernetes.io/projected/fd21ac44-8c6b-4db7-b5ed-76b0503419dc-kube-api-access-45tkn\") pod \"barbican-worker-7d8dc7bdcf-447fv\" (UID: \"fd21ac44-8c6b-4db7-b5ed-76b0503419dc\") " pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.078791 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21c928c2-98cf-48a2-b04b-e7520b36c73a-logs\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " 
pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.078929 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-config-data\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079006 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvh8f\" (UniqueName: \"kubernetes.io/projected/21c928c2-98cf-48a2-b04b-e7520b36c73a-kube-api-access-lvh8f\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079159 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-config-data-custom\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079221 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079439 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079499 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-config\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079588 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-combined-ca-bundle\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079632 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglp5\" 
(UniqueName: \"kubernetes.io/projected/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-kube-api-access-mglp5\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.079663 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.093081 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21c928c2-98cf-48a2-b04b-e7520b36c73a-logs\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.112981 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8f775cf4-7m6bg"] Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.115263 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.118358 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvh8f\" (UniqueName: \"kubernetes.io/projected/21c928c2-98cf-48a2-b04b-e7520b36c73a-kube-api-access-lvh8f\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.132589 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-config-data-custom\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.133482 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.120417 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-config-data\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.134839 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7d8dc7bdcf-447fv" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.187510 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.187588 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.187657 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.187690 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-config\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.189937 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.195372 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.196978 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mglp5\" (UniqueName: \"kubernetes.io/projected/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-kube-api-access-mglp5\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.200695 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-config\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.200986 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c928c2-98cf-48a2-b04b-e7520b36c73a-combined-ca-bundle\") pod \"barbican-keystone-listener-6955ddcccd-p7hv5\" (UID: \"21c928c2-98cf-48a2-b04b-e7520b36c73a\") " pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 
10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.201183 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.206249 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.218781 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.219961 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mglp5\" (UniqueName: \"kubernetes.io/projected/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-kube-api-access-mglp5\") pod \"dnsmasq-dns-85ff748b95-6svhj\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.231822 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8f775cf4-7m6bg"] Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.283922 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.306599 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-combined-ca-bundle\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.306809 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4bqv\" (UniqueName: \"kubernetes.io/projected/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-kube-api-access-k4bqv\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.306968 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-logs\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.307004 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.307055 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data-custom\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.409352 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4bqv\" (UniqueName: \"kubernetes.io/projected/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-kube-api-access-k4bqv\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.409495 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-logs\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.409523 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.409564 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data-custom\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: 
\"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.409642 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-combined-ca-bundle\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.415158 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-logs\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.418249 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-combined-ca-bundle\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.423456 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.429740 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data-custom\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.434834 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4bqv\" (UniqueName: \"kubernetes.io/projected/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-kube-api-access-k4bqv\") pod \"barbican-api-8f775cf4-7m6bg\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.479661 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.582224 4904 generic.go:334] "Generic (PLEG): container finished" podID="25955027-6da1-4cce-8074-f079cf65f840" containerID="81e26ee0fa0c94165ab650892fdd472a4e230a32ad7e8da177b4f31df9544d28" exitCode=0 Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.582287 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8zkpf" event={"ID":"25955027-6da1-4cce-8074-f079cf65f840","Type":"ContainerDied","Data":"81e26ee0fa0c94165ab650892fdd472a4e230a32ad7e8da177b4f31df9544d28"} Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.615278 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.715975 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d8dc7bdcf-447fv"] Feb 23 10:25:59 crc kubenswrapper[4904]: W0223 10:25:59.727896 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd21ac44_8c6b_4db7_b5ed_76b0503419dc.slice/crio-d924d2a2eb0d38bd28e9a25af7afaf71a35a2246595acb19a1020cdfbbc845f0 WatchSource:0}: Error finding container d924d2a2eb0d38bd28e9a25af7afaf71a35a2246595acb19a1020cdfbbc845f0: Status 404 returned error can't find the container with id d924d2a2eb0d38bd28e9a25af7afaf71a35a2246595acb19a1020cdfbbc845f0 Feb 23 10:25:59 crc kubenswrapper[4904]: I0223 10:25:59.925867 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6svhj"] Feb 23 10:26:00 crc kubenswrapper[4904]: I0223 10:26:00.198896 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8f775cf4-7m6bg"] Feb 23 10:26:00 crc kubenswrapper[4904]: I0223 10:26:00.315577 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6955ddcccd-p7hv5"] Feb 23 10:26:00 crc kubenswrapper[4904]: I0223 10:26:00.611369 4904 generic.go:334] "Generic (PLEG): container finished" podID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" containerID="09115dd7e2f467c09b29d1d001329cab615110cca63c9b8459b0033a3c5243ad" exitCode=0 Feb 23 10:26:00 crc kubenswrapper[4904]: I0223 10:26:00.611591 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" event={"ID":"f18bd80b-4e0b-4785-83c9-ede9b77d0a43","Type":"ContainerDied","Data":"09115dd7e2f467c09b29d1d001329cab615110cca63c9b8459b0033a3c5243ad"} Feb 23 10:26:00 crc kubenswrapper[4904]: I0223 10:26:00.611838 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" event={"ID":"f18bd80b-4e0b-4785-83c9-ede9b77d0a43","Type":"ContainerStarted","Data":"bd3979fe647fef15ce239357ae30c658decde44437fa1e0c759284b131249aff"} Feb 23 10:26:00 crc kubenswrapper[4904]: I0223 10:26:00.614938 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d8dc7bdcf-447fv" event={"ID":"fd21ac44-8c6b-4db7-b5ed-76b0503419dc","Type":"ContainerStarted","Data":"d924d2a2eb0d38bd28e9a25af7afaf71a35a2246595acb19a1020cdfbbc845f0"} Feb 23 10:26:01 crc kubenswrapper[4904]: I0223 10:26:01.701780 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56599cf886-x6z6x" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 23 10:26:01 crc kubenswrapper[4904]: I0223 10:26:01.821383 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cbb478958-6t4v7" podUID="e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.538360 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f6485bd78-lkn6x"] Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.540400 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.544289 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.544581 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.572580 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f6485bd78-lkn6x"] Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.608586 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-internal-tls-certs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.608974 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-public-tls-certs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.610068 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-config-data-custom\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.610301 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-config-data\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.610509 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-combined-ca-bundle\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.610549 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdbj7\" (UniqueName: \"kubernetes.io/projected/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-kube-api-access-bdbj7\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.610774 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-logs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.719637 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-config-data\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.719987 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-combined-ca-bundle\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.720026 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdbj7\" (UniqueName: \"kubernetes.io/projected/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-kube-api-access-bdbj7\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.720196 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-logs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.720412 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-internal-tls-certs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.720473 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-public-tls-certs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.720528 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-config-data-custom\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.721666 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-logs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.741584 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-public-tls-certs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.742226 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-combined-ca-bundle\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.742743 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-config-data-custom\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.742957 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-config-data\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.751500 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdbj7\" (UniqueName: \"kubernetes.io/projected/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-kube-api-access-bdbj7\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.752454 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7-internal-tls-certs\") pod \"barbican-api-f6485bd78-lkn6x\" (UID: \"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7\") " pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.860371 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.978764 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 23 10:26:02 crc kubenswrapper[4904]: I0223 10:26:02.996546 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.385889 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.423368 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.620076 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.699454 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.704141 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.715396 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.765669 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.802551 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.948121 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d97dd4c5c-7dc7d"] Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.948416 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d97dd4c5c-7dc7d" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-api" containerID="cri-o://0dc465a92ee7151385a1192b02a565c1e08537f972d1c098d94d54a45856ccb3" gracePeriod=30 Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.949696 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d97dd4c5c-7dc7d" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-httpd" containerID="cri-o://4e0ac65d4f3bceff878cc73d871f6abb1589f7eef8361fed9c31b3c0e597c5e3" gracePeriod=30 Feb 23 10:26:03 crc kubenswrapper[4904]: I0223 10:26:03.986320 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-d97dd4c5c-7dc7d" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": EOF" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.050101 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c456b7c45-bb96t"] Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.052257 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c456b7c45-bb96t"] Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.052354 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.162486 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-internal-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.162558 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-ovndb-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.162590 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fkb\" (UniqueName: \"kubernetes.io/projected/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-kube-api-access-59fkb\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.162750 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-combined-ca-bundle\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.162865 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-public-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.162959 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-config\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.163039 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-httpd-config\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.266494 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-combined-ca-bundle\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.266842 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-public-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: 
\"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.266965 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-config\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.267036 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-httpd-config\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.267298 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-internal-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.267406 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-ovndb-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.267444 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fkb\" (UniqueName: \"kubernetes.io/projected/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-kube-api-access-59fkb\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.274290 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-httpd-config\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.275626 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-internal-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.276011 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-combined-ca-bundle\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.278909 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-ovndb-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.280205 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-config\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.293668 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-public-tls-certs\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.294221 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fkb\" (UniqueName: \"kubernetes.io/projected/09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0-kube-api-access-59fkb\") pod \"neutron-5c456b7c45-bb96t\" (UID: \"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0\") " pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.381493 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.712983 4904 generic.go:334] "Generic (PLEG): container finished" podID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerID="4e0ac65d4f3bceff878cc73d871f6abb1589f7eef8361fed9c31b3c0e597c5e3" exitCode=0 Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.713073 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d97dd4c5c-7dc7d" event={"ID":"cb0deee4-46e1-4d7d-aba4-7b9483525f6f","Type":"ContainerDied","Data":"4e0ac65d4f3bceff878cc73d871f6abb1589f7eef8361fed9c31b3c0e597c5e3"} Feb 23 10:26:04 crc kubenswrapper[4904]: I0223 10:26:04.758698 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 23 10:26:05 crc kubenswrapper[4904]: W0223 10:26:05.578320 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c928c2_98cf_48a2_b04b_e7520b36c73a.slice/crio-db574d7c0f5a1fa72b371d2345eaafa85c74b0e7ad3e3155d9936d0d2d5432d4 WatchSource:0}: Error finding container db574d7c0f5a1fa72b371d2345eaafa85c74b0e7ad3e3155d9936d0d2d5432d4: Status 404 returned error can't find the container with id db574d7c0f5a1fa72b371d2345eaafa85c74b0e7ad3e3155d9936d0d2d5432d4 Feb 23 10:26:05 crc kubenswrapper[4904]: W0223 10:26:05.579156 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d0ec2d6_470d_4122_b5ce_fb9871e5774a.slice/crio-bb89a25a0b73a9846874e993a2566d79c87ad93f5d02ba0ba16513e5de5979f7 WatchSource:0}: Error finding container bb89a25a0b73a9846874e993a2566d79c87ad93f5d02ba0ba16513e5de5979f7: Status 404 returned error can't find the container with id bb89a25a0b73a9846874e993a2566d79c87ad93f5d02ba0ba16513e5de5979f7 Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.704362 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8zkpf" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.740727 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" event={"ID":"21c928c2-98cf-48a2-b04b-e7520b36c73a","Type":"ContainerStarted","Data":"db574d7c0f5a1fa72b371d2345eaafa85c74b0e7ad3e3155d9936d0d2d5432d4"} Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.742650 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f775cf4-7m6bg" event={"ID":"1d0ec2d6-470d-4122-b5ce-fb9871e5774a","Type":"ContainerStarted","Data":"bb89a25a0b73a9846874e993a2566d79c87ad93f5d02ba0ba16513e5de5979f7"} Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.747403 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8zkpf" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.747741 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8zkpf" event={"ID":"25955027-6da1-4cce-8074-f079cf65f840","Type":"ContainerDied","Data":"a5151b22ffc669ecbb4d7f0f4d06384f13c121190d126d1be417e5ad0b8ae6d0"} Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.747770 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5151b22ffc669ecbb4d7f0f4d06384f13c121190d126d1be417e5ad0b8ae6d0" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.808844 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-db-sync-config-data\") pod \"25955027-6da1-4cce-8074-f079cf65f840\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.808986 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqch9\" (UniqueName: \"kubernetes.io/projected/25955027-6da1-4cce-8074-f079cf65f840-kube-api-access-lqch9\") pod \"25955027-6da1-4cce-8074-f079cf65f840\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.809169 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-combined-ca-bundle\") pod \"25955027-6da1-4cce-8074-f079cf65f840\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.809210 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-scripts\") pod \"25955027-6da1-4cce-8074-f079cf65f840\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.809243 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25955027-6da1-4cce-8074-f079cf65f840-etc-machine-id\") pod \"25955027-6da1-4cce-8074-f079cf65f840\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.809300 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-config-data\") pod \"25955027-6da1-4cce-8074-f079cf65f840\" (UID: \"25955027-6da1-4cce-8074-f079cf65f840\") " Feb 23 10:26:05 crc 
kubenswrapper[4904]: I0223 10:26:05.818476 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25955027-6da1-4cce-8074-f079cf65f840-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "25955027-6da1-4cce-8074-f079cf65f840" (UID: "25955027-6da1-4cce-8074-f079cf65f840"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.818829 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-scripts" (OuterVolumeSpecName: "scripts") pod "25955027-6da1-4cce-8074-f079cf65f840" (UID: "25955027-6da1-4cce-8074-f079cf65f840"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.820935 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25955027-6da1-4cce-8074-f079cf65f840-kube-api-access-lqch9" (OuterVolumeSpecName: "kube-api-access-lqch9") pod "25955027-6da1-4cce-8074-f079cf65f840" (UID: "25955027-6da1-4cce-8074-f079cf65f840"). InnerVolumeSpecName "kube-api-access-lqch9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.847765 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25955027-6da1-4cce-8074-f079cf65f840" (UID: "25955027-6da1-4cce-8074-f079cf65f840"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.859125 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "25955027-6da1-4cce-8074-f079cf65f840" (UID: "25955027-6da1-4cce-8074-f079cf65f840"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.873220 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-d97dd4c5c-7dc7d" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": dial tcp 10.217.0.168:9696: connect: connection refused" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.885500 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-config-data" (OuterVolumeSpecName: "config-data") pod "25955027-6da1-4cce-8074-f079cf65f840" (UID: "25955027-6da1-4cce-8074-f079cf65f840"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.913779 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.913820 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.913833 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25955027-6da1-4cce-8074-f079cf65f840-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.913842 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.913852 4904 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25955027-6da1-4cce-8074-f079cf65f840-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:05 crc kubenswrapper[4904]: I0223 10:26:05.913863 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqch9\" (UniqueName: \"kubernetes.io/projected/25955027-6da1-4cce-8074-f079cf65f840-kube-api-access-lqch9\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.044830 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:07 crc kubenswrapper[4904]: E0223 10:26:07.045936 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25955027-6da1-4cce-8074-f079cf65f840" containerName="cinder-db-sync" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.045954 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="25955027-6da1-4cce-8074-f079cf65f840" containerName="cinder-db-sync" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.046181 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="25955027-6da1-4cce-8074-f079cf65f840" containerName="cinder-db-sync" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.047383 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.060931 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.061193 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.061315 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.061416 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-n7lsn" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.079883 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.165207 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6svhj"] Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.167360 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.167448 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.167724 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-scripts\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.167846 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbhn8\" (UniqueName: \"kubernetes.io/projected/c29372e9-1b19-4d4d-af56-67620b1ae385-kube-api-access-gbhn8\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.168023 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.168125 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29372e9-1b19-4d4d-af56-67620b1ae385-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.223808 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j9kcl"] Feb 23 10:26:07 crc 
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.227456 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.238296 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.240486 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.247313 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.263323 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j9kcl"]
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.306317 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.336808 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.337017 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.337128 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-scripts\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.337204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbhn8\" (UniqueName: \"kubernetes.io/projected/c29372e9-1b19-4d4d-af56-67620b1ae385-kube-api-access-gbhn8\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.337359 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.337463 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29372e9-1b19-4d4d-af56-67620b1ae385-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.384107 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29372e9-1b19-4d4d-af56-67620b1ae385-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.386191 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.388855 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.395769 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.432657 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-scripts\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.453908 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbhn8\" (UniqueName: \"kubernetes.io/projected/c29372e9-1b19-4d4d-af56-67620b1ae385-kube-api-access-gbhn8\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.457141 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.457921 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485016 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgbrb\" (UniqueName: \"kubernetes.io/projected/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-kube-api-access-kgbrb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485075 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-config\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485118 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9930fc9-ddbc-4453-a790-4adba475fc22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485135 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485152 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hbwj\" (UniqueName: \"kubernetes.io/projected/c9930fc9-ddbc-4453-a790-4adba475fc22-kube-api-access-6hbwj\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485169 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485226 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485251 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data-custom\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485283 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-scripts\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485334 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485411 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485433 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9930fc9-ddbc-4453-a790-4adba475fc22-logs\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.485485 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.491674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data\") pod \"cinder-scheduler-0\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " pod="openstack/cinder-scheduler-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588526 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgbrb\" (UniqueName: \"kubernetes.io/projected/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-kube-api-access-kgbrb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588580 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-config\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588603 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hbwj\" (UniqueName: \"kubernetes.io/projected/c9930fc9-ddbc-4453-a790-4adba475fc22-kube-api-access-6hbwj\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588623 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9930fc9-ddbc-4453-a790-4adba475fc22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588637 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588654 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588692 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588730 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data-custom\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588758 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-scripts\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0"
pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588804 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.588853 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.589977 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9930fc9-ddbc-4453-a790-4adba475fc22-logs\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.590047 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.590837 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9930fc9-ddbc-4453-a790-4adba475fc22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.591351 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.591401 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.592306 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9930fc9-ddbc-4453-a790-4adba475fc22-logs\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.593294 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-config\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.597487 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.600218 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.608769 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.609165 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.613648 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-scripts\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.614014 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.615996 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgbrb\" (UniqueName: \"kubernetes.io/projected/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-kube-api-access-kgbrb\") pod \"dnsmasq-dns-5c9776ccc5-j9kcl\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.616543 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hbwj\" (UniqueName: \"kubernetes.io/projected/c9930fc9-ddbc-4453-a790-4adba475fc22-kube-api-access-6hbwj\") pod \"cinder-api-0\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.718604 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.734539 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-n7lsn" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.742816 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.744619 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.783181 4904 generic.go:334] "Generic (PLEG): container finished" podID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerID="0dc465a92ee7151385a1192b02a565c1e08537f972d1c098d94d54a45856ccb3" exitCode=0 Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.783329 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d97dd4c5c-7dc7d" event={"ID":"cb0deee4-46e1-4d7d-aba4-7b9483525f6f","Type":"ContainerDied","Data":"0dc465a92ee7151385a1192b02a565c1e08537f972d1c098d94d54a45856ccb3"} Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.789440 4904 generic.go:334] "Generic (PLEG): container finished" podID="51427d17-4627-402f-a41d-987ab62579a5" containerID="257f56e6f25ff62cb3502c2a87d3bda8e7b75c1e0b265bffb9cc147310ab1b0f" exitCode=137 Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.789496 4904 generic.go:334] "Generic (PLEG): container finished" podID="51427d17-4627-402f-a41d-987ab62579a5" containerID="3e0417c569d90b16c1eda9e930e34f06958062ae8a7a52167e41b9dceeebaef2" exitCode=137 Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.789547 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544b9cc98f-nzzsf" event={"ID":"51427d17-4627-402f-a41d-987ab62579a5","Type":"ContainerDied","Data":"257f56e6f25ff62cb3502c2a87d3bda8e7b75c1e0b265bffb9cc147310ab1b0f"} Feb 23 10:26:07 crc kubenswrapper[4904]: I0223 10:26:07.789589 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544b9cc98f-nzzsf" event={"ID":"51427d17-4627-402f-a41d-987ab62579a5","Type":"ContainerDied","Data":"3e0417c569d90b16c1eda9e930e34f06958062ae8a7a52167e41b9dceeebaef2"} Feb 23 10:26:09 crc kubenswrapper[4904]: I0223 10:26:09.412087 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 10:26:09 crc kubenswrapper[4904]: I0223 10:26:09.649856 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j9kcl"] Feb 23 10:26:09 crc kubenswrapper[4904]: I0223 10:26:09.744908 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f6485bd78-lkn6x"] Feb 23 10:26:09 crc kubenswrapper[4904]: W0223 10:26:09.895032 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded1d8fb3_01b5_49ff_b7c3_66cfb7ff32e7.slice/crio-6c10e2b4d82860b73723f964841c530135d251a16acff3e9366dd34bf4210a7f WatchSource:0}: Error finding container 6c10e2b4d82860b73723f964841c530135d251a16acff3e9366dd34bf4210a7f: Status 404 returned error can't find the container with id 6c10e2b4d82860b73723f964841c530135d251a16acff3e9366dd34bf4210a7f Feb 23 10:26:09 crc kubenswrapper[4904]: W0223 10:26:09.901867 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7936ce2_c31b_4a34_92c6_3e1ca7bde7a8.slice/crio-8ef7ac5716479e81344298bb908afe5f754772ec29520c733d75c9bd135f4527 WatchSource:0}: Error finding container 8ef7ac5716479e81344298bb908afe5f754772ec29520c733d75c9bd135f4527: Status 404 returned error can't find the container with id 8ef7ac5716479e81344298bb908afe5f754772ec29520c733d75c9bd135f4527 Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.181868 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.251979 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.270782 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-internal-tls-certs\") pod \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.271050 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-config\") pod \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.271204 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z5fs\" (UniqueName: \"kubernetes.io/projected/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-kube-api-access-9z5fs\") pod \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.271460 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-public-tls-certs\") pod \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.271628 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-httpd-config\") pod \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.271708 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-ovndb-tls-certs\") pod \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.271807 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-combined-ca-bundle\") pod \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.310896 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cb0deee4-46e1-4d7d-aba4-7b9483525f6f" (UID: "cb0deee4-46e1-4d7d-aba4-7b9483525f6f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.331057 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-kube-api-access-9z5fs" (OuterVolumeSpecName: "kube-api-access-9z5fs") pod "cb0deee4-46e1-4d7d-aba4-7b9483525f6f" (UID: "cb0deee4-46e1-4d7d-aba4-7b9483525f6f"). 
InnerVolumeSpecName "kube-api-access-9z5fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.377786 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96t8c\" (UniqueName: \"kubernetes.io/projected/51427d17-4627-402f-a41d-987ab62579a5-kube-api-access-96t8c\") pod \"51427d17-4627-402f-a41d-987ab62579a5\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.377892 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51427d17-4627-402f-a41d-987ab62579a5-horizon-secret-key\") pod \"51427d17-4627-402f-a41d-987ab62579a5\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.378005 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51427d17-4627-402f-a41d-987ab62579a5-logs\") pod \"51427d17-4627-402f-a41d-987ab62579a5\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.378579 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51427d17-4627-402f-a41d-987ab62579a5-logs" (OuterVolumeSpecName: "logs") pod "51427d17-4627-402f-a41d-987ab62579a5" (UID: "51427d17-4627-402f-a41d-987ab62579a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.378917 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-scripts\") pod \"51427d17-4627-402f-a41d-987ab62579a5\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.379663 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-config-data\") pod \"51427d17-4627-402f-a41d-987ab62579a5\" (UID: \"51427d17-4627-402f-a41d-987ab62579a5\") " Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.388585 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z5fs\" (UniqueName: \"kubernetes.io/projected/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-kube-api-access-9z5fs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.389525 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51427d17-4627-402f-a41d-987ab62579a5-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.389559 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.400866 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51427d17-4627-402f-a41d-987ab62579a5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "51427d17-4627-402f-a41d-987ab62579a5" (UID: "51427d17-4627-402f-a41d-987ab62579a5"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.400892 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51427d17-4627-402f-a41d-987ab62579a5-kube-api-access-96t8c" (OuterVolumeSpecName: "kube-api-access-96t8c") pod "51427d17-4627-402f-a41d-987ab62579a5" (UID: "51427d17-4627-402f-a41d-987ab62579a5"). InnerVolumeSpecName "kube-api-access-96t8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.465976 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.502231 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96t8c\" (UniqueName: \"kubernetes.io/projected/51427d17-4627-402f-a41d-987ab62579a5-kube-api-access-96t8c\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.502279 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51427d17-4627-402f-a41d-987ab62579a5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.613771 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 10:26:10 crc kubenswrapper[4904]: E0223 10:26:10.694268 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" Feb 23 10:26:10 crc kubenswrapper[4904]: W0223 10:26:10.710379 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9930fc9_ddbc_4453_a790_4adba475fc22.slice/crio-21b429d5c9769fdd16ae07dd78e105006321d8988082ab762703ff06b177c0d0 WatchSource:0}: Error finding container 21b429d5c9769fdd16ae07dd78e105006321d8988082ab762703ff06b177c0d0: Status 404 returned error can't find the container with id 21b429d5c9769fdd16ae07dd78e105006321d8988082ab762703ff06b177c0d0 Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.745343 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c456b7c45-bb96t"] Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.851260 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-scripts" (OuterVolumeSpecName: "scripts") pod "51427d17-4627-402f-a41d-987ab62579a5" (UID: "51427d17-4627-402f-a41d-987ab62579a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.852913 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d97dd4c5c-7dc7d" event={"ID":"cb0deee4-46e1-4d7d-aba4-7b9483525f6f","Type":"ContainerDied","Data":"4454e4b550d97809d3f3a2670b89dc881f1702389d358ca00b011a940f7aa767"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.852953 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d97dd4c5c-7dc7d" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.852982 4904 scope.go:117] "RemoveContainer" containerID="4e0ac65d4f3bceff878cc73d871f6abb1589f7eef8361fed9c31b3c0e597c5e3" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.863905 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9930fc9-ddbc-4453-a790-4adba475fc22","Type":"ContainerStarted","Data":"21b429d5c9769fdd16ae07dd78e105006321d8988082ab762703ff06b177c0d0"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.876161 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad","Type":"ContainerStarted","Data":"da8d2da232f3f0cd5e3242109cb0fbad0dd24c64a1829b3b066bec68d065d228"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.878531 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.878583 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="proxy-httpd" containerID="cri-o://da8d2da232f3f0cd5e3242109cb0fbad0dd24c64a1829b3b066bec68d065d228" gracePeriod=30 Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.878597 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="sg-core" containerID="cri-o://6081ce68167e59f903787e2ef7815cd75e1e31249b818587c491f38a71a3b36e" gracePeriod=30 Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.877842 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="ceilometer-notification-agent" containerID="cri-o://5622d2cefca58b2c6dab75a11d7b5ccea91386d592f09ff7a19a6c956bd5071f" gracePeriod=30 Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.885104 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c29372e9-1b19-4d4d-af56-67620b1ae385","Type":"ContainerStarted","Data":"5896b1e5ffc2e0dcff4e99062d79c088954715cff94aa96ade6e36813d5d28ea"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.888612 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f775cf4-7m6bg" event={"ID":"1d0ec2d6-470d-4122-b5ce-fb9871e5774a","Type":"ContainerStarted","Data":"dd2ae5adec58dc1fb365b666bd1b6c84528f2f87dbd738002fea908b48b866c0"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.909685 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544b9cc98f-nzzsf" event={"ID":"51427d17-4627-402f-a41d-987ab62579a5","Type":"ContainerDied","Data":"e72b373c3ca08b1716f8c3801777ced4234f9314425d3c4b447939c55c69dba0"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.909846 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-544b9cc98f-nzzsf" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.918473 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.919297 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" event={"ID":"f18bd80b-4e0b-4785-83c9-ede9b77d0a43","Type":"ContainerStarted","Data":"59b92ffdbdd5d2ca352a30fedb36cacb4ec175b093dfdae9bd7d2e2977defb88"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.919546 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" podUID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" containerName="dnsmasq-dns" containerID="cri-o://59b92ffdbdd5d2ca352a30fedb36cacb4ec175b093dfdae9bd7d2e2977defb88" gracePeriod=10 Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.919874 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.921585 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c456b7c45-bb96t" event={"ID":"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0","Type":"ContainerStarted","Data":"0cb5ae004b90edb5adc269238a00ee1101366dec28315dc67e2f613cb6ef0387"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.923627 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6485bd78-lkn6x" event={"ID":"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7","Type":"ContainerStarted","Data":"6c10e2b4d82860b73723f964841c530135d251a16acff3e9366dd34bf4210a7f"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.967247 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" event={"ID":"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8","Type":"ContainerStarted","Data":"8ef7ac5716479e81344298bb908afe5f754772ec29520c733d75c9bd135f4527"} Feb 23 10:26:10 crc kubenswrapper[4904]: I0223 10:26:10.967446 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" podStartSLOduration=12.967433886 podStartE2EDuration="12.967433886s" podCreationTimestamp="2026-02-23 10:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:10.948296172 +0000 UTC m=+1204.368669705" watchObservedRunningTime="2026-02-23 10:26:10.967433886 +0000 UTC m=+1204.387807399" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.199132 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.220938 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cb0deee4-46e1-4d7d-aba4-7b9483525f6f" (UID: "cb0deee4-46e1-4d7d-aba4-7b9483525f6f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.226918 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.616014 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-config" (OuterVolumeSpecName: "config") pod "cb0deee4-46e1-4d7d-aba4-7b9483525f6f" (UID: "cb0deee4-46e1-4d7d-aba4-7b9483525f6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.644512 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.713862 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-config-data" (OuterVolumeSpecName: "config-data") pod "51427d17-4627-402f-a41d-987ab62579a5" (UID: "51427d17-4627-402f-a41d-987ab62579a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.744246 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cb0deee4-46e1-4d7d-aba4-7b9483525f6f" (UID: "cb0deee4-46e1-4d7d-aba4-7b9483525f6f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.745842 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb0deee4-46e1-4d7d-aba4-7b9483525f6f" (UID: "cb0deee4-46e1-4d7d-aba4-7b9483525f6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.748479 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-combined-ca-bundle\") pod \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\" (UID: \"cb0deee4-46e1-4d7d-aba4-7b9483525f6f\") " Feb 23 10:26:11 crc kubenswrapper[4904]: W0223 10:26:11.748918 4904 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cb0deee4-46e1-4d7d-aba4-7b9483525f6f/volumes/kubernetes.io~secret/combined-ca-bundle Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.752754 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb0deee4-46e1-4d7d-aba4-7b9483525f6f" (UID: "cb0deee4-46e1-4d7d-aba4-7b9483525f6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.753941 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51427d17-4627-402f-a41d-987ab62579a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.753988 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.754003 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.799414 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cb0deee4-46e1-4d7d-aba4-7b9483525f6f" (UID: "cb0deee4-46e1-4d7d-aba4-7b9483525f6f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:11 crc kubenswrapper[4904]: I0223 10:26:11.859700 4904 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb0deee4-46e1-4d7d-aba4-7b9483525f6f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.018861 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" event={"ID":"21c928c2-98cf-48a2-b04b-e7520b36c73a","Type":"ContainerStarted","Data":"be9e98e0875fa7c40ea4f7ab9800fa5d2bea1f815383f7b7efad41d88a35f28a"} Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.018920 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-56bb764cb4-74prt"] Feb 23 10:26:12 crc kubenswrapper[4904]: E0223 10:26:12.019573 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-api" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.019599 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-api" Feb 23 10:26:12 crc kubenswrapper[4904]: E0223 10:26:12.019628 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51427d17-4627-402f-a41d-987ab62579a5" containerName="horizon" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.019635 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="51427d17-4627-402f-a41d-987ab62579a5" containerName="horizon" Feb 23 10:26:12 crc kubenswrapper[4904]: E0223 10:26:12.019655 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51427d17-4627-402f-a41d-987ab62579a5" containerName="horizon-log" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.019660 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="51427d17-4627-402f-a41d-987ab62579a5" containerName="horizon-log" Feb 23 10:26:12 crc kubenswrapper[4904]: E0223 10:26:12.019681 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-httpd" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.019687 4904 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-httpd" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.021842 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="51427d17-4627-402f-a41d-987ab62579a5" containerName="horizon" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.021878 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="51427d17-4627-402f-a41d-987ab62579a5" containerName="horizon-log" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.021901 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-httpd" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.021917 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" containerName="neutron-api" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.023773 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56bb764cb4-74prt"] Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.023887 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.026276 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad","Type":"ContainerDied","Data":"da8d2da232f3f0cd5e3242109cb0fbad0dd24c64a1829b3b066bec68d065d228"} Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.026310 4904 generic.go:334] "Generic (PLEG): container finished" podID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerID="da8d2da232f3f0cd5e3242109cb0fbad0dd24c64a1829b3b066bec68d065d228" exitCode=0 Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.026354 4904 generic.go:334] "Generic (PLEG): container finished" podID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerID="6081ce68167e59f903787e2ef7815cd75e1e31249b818587c491f38a71a3b36e" exitCode=2 Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.026425 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad","Type":"ContainerDied","Data":"6081ce68167e59f903787e2ef7815cd75e1e31249b818587c491f38a71a3b36e"} Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.032178 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6485bd78-lkn6x" event={"ID":"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7","Type":"ContainerStarted","Data":"d955c940c2de263b373e71ddb76753c7084d5b6d3e1e90db11dde102e4a47aac"} Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.049484 4904 generic.go:334] "Generic (PLEG): container finished" podID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" containerID="c076d0c6ee0ed5b336708911364ec17de0aa89279e11b0f9842757a95cc2b772" exitCode=0 Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.049551 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" event={"ID":"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8","Type":"ContainerDied","Data":"c076d0c6ee0ed5b336708911364ec17de0aa89279e11b0f9842757a95cc2b772"} Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.058297 4904 generic.go:334] "Generic (PLEG): container finished" podID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" containerID="59b92ffdbdd5d2ca352a30fedb36cacb4ec175b093dfdae9bd7d2e2977defb88" exitCode=0 Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.058608 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" event={"ID":"f18bd80b-4e0b-4785-83c9-ede9b77d0a43","Type":"ContainerDied","Data":"59b92ffdbdd5d2ca352a30fedb36cacb4ec175b093dfdae9bd7d2e2977defb88"} Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.064190 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a81e676-6e99-4cf5-88a8-78d0bb36f896-logs\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.064245 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-combined-ca-bundle\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.064299 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-internal-tls-certs\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.064321 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-scripts\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.064340 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5crlw\" (UniqueName: \"kubernetes.io/projected/3a81e676-6e99-4cf5-88a8-78d0bb36f896-kube-api-access-5crlw\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.064369 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-config-data\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.064440 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-public-tls-certs\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.079087 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d8dc7bdcf-447fv" event={"ID":"fd21ac44-8c6b-4db7-b5ed-76b0503419dc","Type":"ContainerStarted","Data":"e16459978c9af2ef58ec81808954b24c33e2ae9953782761afec0ea18a90dd5a"} Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.091797 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-544b9cc98f-nzzsf"] Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 
10:26:12.110260 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-544b9cc98f-nzzsf"] Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.176590 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d97dd4c5c-7dc7d"] Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.185777 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-internal-tls-certs\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.185860 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-scripts\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.185887 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5crlw\" (UniqueName: \"kubernetes.io/projected/3a81e676-6e99-4cf5-88a8-78d0bb36f896-kube-api-access-5crlw\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.185939 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-config-data\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.186110 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-public-tls-certs\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.186268 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a81e676-6e99-4cf5-88a8-78d0bb36f896-logs\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.186292 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-combined-ca-bundle\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.194103 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a81e676-6e99-4cf5-88a8-78d0bb36f896-logs\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.196687 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-public-tls-certs\") 
pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.197806 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-scripts\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.218863 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-combined-ca-bundle\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.222698 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-config-data\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.223943 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d97dd4c5c-7dc7d"] Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.228404 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81e676-6e99-4cf5-88a8-78d0bb36f896-internal-tls-certs\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.248387 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5crlw\" (UniqueName: \"kubernetes.io/projected/3a81e676-6e99-4cf5-88a8-78d0bb36f896-kube-api-access-5crlw\") pod \"placement-56bb764cb4-74prt\" (UID: \"3a81e676-6e99-4cf5-88a8-78d0bb36f896\") " pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.264284 4904 scope.go:117] "RemoveContainer" containerID="0dc465a92ee7151385a1192b02a565c1e08537f972d1c098d94d54a45856ccb3" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.360172 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.450614 4904 scope.go:117] "RemoveContainer" containerID="257f56e6f25ff62cb3502c2a87d3bda8e7b75c1e0b265bffb9cc147310ab1b0f" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.500827 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.598306 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-config\") pod \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.598509 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-sb\") pod \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.598602 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-svc\") pod \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.598631 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mglp5\" (UniqueName: \"kubernetes.io/projected/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-kube-api-access-mglp5\") pod \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.598756 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-nb\") pod \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.598821 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-swift-storage-0\") pod \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\" (UID: \"f18bd80b-4e0b-4785-83c9-ede9b77d0a43\") " Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.710111 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-kube-api-access-mglp5" (OuterVolumeSpecName: "kube-api-access-mglp5") pod "f18bd80b-4e0b-4785-83c9-ede9b77d0a43" (UID: "f18bd80b-4e0b-4785-83c9-ede9b77d0a43"). InnerVolumeSpecName "kube-api-access-mglp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.809487 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mglp5\" (UniqueName: \"kubernetes.io/projected/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-kube-api-access-mglp5\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:12 crc kubenswrapper[4904]: I0223 10:26:12.849603 4904 scope.go:117] "RemoveContainer" containerID="3e0417c569d90b16c1eda9e930e34f06958062ae8a7a52167e41b9dceeebaef2" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.053260 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56bb764cb4-74prt"] Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.148600 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9930fc9-ddbc-4453-a790-4adba475fc22","Type":"ContainerStarted","Data":"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.151932 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" event={"ID":"f18bd80b-4e0b-4785-83c9-ede9b77d0a43","Type":"ContainerDied","Data":"bd3979fe647fef15ce239357ae30c658decde44437fa1e0c759284b131249aff"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.152006 4904 scope.go:117] "RemoveContainer" containerID="59b92ffdbdd5d2ca352a30fedb36cacb4ec175b093dfdae9bd7d2e2977defb88" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.152214 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6svhj" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.178654 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d8dc7bdcf-447fv" event={"ID":"fd21ac44-8c6b-4db7-b5ed-76b0503419dc","Type":"ContainerStarted","Data":"c26ee2f426f80a85fdbe1359b493ccf7be8a7837cf3065b9906bc48ab0c90dae"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.186615 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" event={"ID":"21c928c2-98cf-48a2-b04b-e7520b36c73a","Type":"ContainerStarted","Data":"e9e026f2cdec02e2c171371c863bf3b4134ccce0926b6e25535d626969976d01"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.197899 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6485bd78-lkn6x" event={"ID":"ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7","Type":"ContainerStarted","Data":"c6e61c62c5f6ec3ba40ff2a030c046edf6ec5049bddbe5ff015c94ad9f5366ac"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.199119 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.199213 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.245493 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d8dc7bdcf-447fv" podStartSLOduration=4.976508666 podStartE2EDuration="15.245468379s" podCreationTimestamp="2026-02-23 10:25:58 +0000 UTC" firstStartedPulling="2026-02-23 10:25:59.729772851 +0000 UTC m=+1193.150146364" lastFinishedPulling="2026-02-23 10:26:09.998732574 +0000 UTC m=+1203.419106077" observedRunningTime="2026-02-23 10:26:13.205338879 +0000 UTC m=+1206.625712422" 
watchObservedRunningTime="2026-02-23 10:26:13.245468379 +0000 UTC m=+1206.665841892" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.251271 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f775cf4-7m6bg" event={"ID":"1d0ec2d6-470d-4122-b5ce-fb9871e5774a","Type":"ContainerStarted","Data":"249f89cd6933b705b98cbd7b967fd3b62fead1f4c396e98e0b4e6d5c38cee342"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.251843 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.251975 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.253031 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f6485bd78-lkn6x" podStartSLOduration=11.253019534 podStartE2EDuration="11.253019534s" podCreationTimestamp="2026-02-23 10:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:13.242590087 +0000 UTC m=+1206.662963600" watchObservedRunningTime="2026-02-23 10:26:13.253019534 +0000 UTC m=+1206.673393047" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.261085 4904 generic.go:334] "Generic (PLEG): container finished" podID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerID="5622d2cefca58b2c6dab75a11d7b5ccea91386d592f09ff7a19a6c956bd5071f" exitCode=0 Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.290564 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51427d17-4627-402f-a41d-987ab62579a5" path="/var/lib/kubelet/pods/51427d17-4627-402f-a41d-987ab62579a5/volumes" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.292193 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0deee4-46e1-4d7d-aba4-7b9483525f6f" path="/var/lib/kubelet/pods/cb0deee4-46e1-4d7d-aba4-7b9483525f6f/volumes" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.335796 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6955ddcccd-p7hv5" podStartSLOduration=10.935566579 podStartE2EDuration="15.335773656s" podCreationTimestamp="2026-02-23 10:25:58 +0000 UTC" firstStartedPulling="2026-02-23 10:26:05.604075935 +0000 UTC m=+1199.024449448" lastFinishedPulling="2026-02-23 10:26:10.004283012 +0000 UTC m=+1203.424656525" observedRunningTime="2026-02-23 10:26:13.280571837 +0000 UTC m=+1206.700945350" watchObservedRunningTime="2026-02-23 10:26:13.335773656 +0000 UTC m=+1206.756147169" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.340700 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8f775cf4-7m6bg" podStartSLOduration=14.340692966 podStartE2EDuration="14.340692966s" podCreationTimestamp="2026-02-23 10:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:13.317582439 +0000 UTC m=+1206.737955952" watchObservedRunningTime="2026-02-23 10:26:13.340692966 +0000 UTC m=+1206.761066479" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.390453 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod 
"f18bd80b-4e0b-4785-83c9-ede9b77d0a43" (UID: "f18bd80b-4e0b-4785-83c9-ede9b77d0a43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.439724 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.473147 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f18bd80b-4e0b-4785-83c9-ede9b77d0a43" (UID: "f18bd80b-4e0b-4785-83c9-ede9b77d0a43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.508243 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f18bd80b-4e0b-4785-83c9-ede9b77d0a43" (UID: "f18bd80b-4e0b-4785-83c9-ede9b77d0a43"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.539216 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f18bd80b-4e0b-4785-83c9-ede9b77d0a43" (UID: "f18bd80b-4e0b-4785-83c9-ede9b77d0a43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.546312 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-config" (OuterVolumeSpecName: "config") pod "f18bd80b-4e0b-4785-83c9-ede9b77d0a43" (UID: "f18bd80b-4e0b-4785-83c9-ede9b77d0a43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.551248 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.551299 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.551312 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.551323 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f18bd80b-4e0b-4785-83c9-ede9b77d0a43-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.719967 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56bb764cb4-74prt" event={"ID":"3a81e676-6e99-4cf5-88a8-78d0bb36f896","Type":"ContainerStarted","Data":"ff8ee5a13b2ed0ec788defa031ad048a523be4a55d2f4b8ab4a21d734a0c1a90"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.720306 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad","Type":"ContainerDied","Data":"5622d2cefca58b2c6dab75a11d7b5ccea91386d592f09ff7a19a6c956bd5071f"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.720332 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c456b7c45-bb96t" event={"ID":"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0","Type":"ContainerStarted","Data":"02702500c0d44738d03abc524d1b7687fbbca22f8aebb3c69284657104d2648b"} Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.736585 4904 scope.go:117] "RemoveContainer" containerID="09115dd7e2f467c09b29d1d001329cab615110cca63c9b8459b0033a3c5243ad" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.739481 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.833004 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6svhj"] Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.842852 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6svhj"] Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.859188 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-sg-core-conf-yaml\") pod \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.859346 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6wlw\" (UniqueName: \"kubernetes.io/projected/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-kube-api-access-l6wlw\") pod \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.860081 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-combined-ca-bundle\") pod \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.860141 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-config-data\") pod \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.860254 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-run-httpd\") pod \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.860398 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-scripts\") pod \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.860457 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-log-httpd\") pod \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\" (UID: \"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad\") " Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.860901 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" (UID: "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.861304 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" (UID: "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.861420 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.878321 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-scripts" (OuterVolumeSpecName: "scripts") pod "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" (UID: "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.892807 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-kube-api-access-l6wlw" (OuterVolumeSpecName: "kube-api-access-l6wlw") pod "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" (UID: "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad"). InnerVolumeSpecName "kube-api-access-l6wlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.963226 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.963357 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:13 crc kubenswrapper[4904]: I0223 10:26:13.963417 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6wlw\" (UniqueName: \"kubernetes.io/projected/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-kube-api-access-l6wlw\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.003168 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" (UID: "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.051404 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" (UID: "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.070266 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.070539 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.101535 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-config-data" (OuterVolumeSpecName: "config-data") pod "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" (UID: "33f04219-ca0b-4cc3-86c5-67e0ceaf18ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.173775 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.332004 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c456b7c45-bb96t" event={"ID":"09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0","Type":"ContainerStarted","Data":"0bbcfd740b29c1f1abd80439218f22774ff219b0f2e9874f71a7fa4a09244898"} Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.333002 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.337433 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" event={"ID":"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8","Type":"ContainerStarted","Data":"00a13767afa68fb19bc0e9ec1ab15ad8f8655756c11a201da0afdee435585226"} Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.339049 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.343387 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56bb764cb4-74prt" event={"ID":"3a81e676-6e99-4cf5-88a8-78d0bb36f896","Type":"ContainerStarted","Data":"19a309767130816fbdf21913682d4a718f0191da7508e59d3294143df89dc533"} Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.350858 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.352134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f04219-ca0b-4cc3-86c5-67e0ceaf18ad","Type":"ContainerDied","Data":"7b315b7fa289f8ff09ef62d4f329d493d7fd293bfc23529757e5e0663330fed2"} Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.352196 4904 scope.go:117] "RemoveContainer" containerID="da8d2da232f3f0cd5e3242109cb0fbad0dd24c64a1829b3b066bec68d065d228" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.370190 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c29372e9-1b19-4d4d-af56-67620b1ae385","Type":"ContainerStarted","Data":"40548c3bf23bcc055376a3737e8aeadd7aca30192330cfdafcb17930110e5c22"} Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.372601 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c456b7c45-bb96t" podStartSLOduration=11.372569083 podStartE2EDuration="11.372569083s" podCreationTimestamp="2026-02-23 10:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:14.359620845 +0000 UTC m=+1207.779994358" watchObservedRunningTime="2026-02-23 10:26:14.372569083 +0000 UTC m=+1207.792942596" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.402088 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" podStartSLOduration=7.402067751 podStartE2EDuration="7.402067751s" podCreationTimestamp="2026-02-23 10:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:14.389324139 +0000 UTC m=+1207.809697652" watchObservedRunningTime="2026-02-23 10:26:14.402067751 +0000 UTC m=+1207.822441264" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.504623 4904 scope.go:117] "RemoveContainer" containerID="6081ce68167e59f903787e2ef7815cd75e1e31249b818587c491f38a71a3b36e" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.550037 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.568909 4904 scope.go:117] "RemoveContainer" containerID="5622d2cefca58b2c6dab75a11d7b5ccea91386d592f09ff7a19a6c956bd5071f" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.579786 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.604789 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:14 crc kubenswrapper[4904]: E0223 10:26:14.605928 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" containerName="dnsmasq-dns" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.606019 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" containerName="dnsmasq-dns" Feb 23 10:26:14 crc kubenswrapper[4904]: E0223 10:26:14.606094 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="ceilometer-notification-agent" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.606157 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" 
containerName="ceilometer-notification-agent" Feb 23 10:26:14 crc kubenswrapper[4904]: E0223 10:26:14.606223 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" containerName="init" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.606278 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" containerName="init" Feb 23 10:26:14 crc kubenswrapper[4904]: E0223 10:26:14.606351 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="sg-core" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.606404 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="sg-core" Feb 23 10:26:14 crc kubenswrapper[4904]: E0223 10:26:14.606478 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="proxy-httpd" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.606536 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="proxy-httpd" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.606830 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="ceilometer-notification-agent" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.606906 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" containerName="dnsmasq-dns" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.606969 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="proxy-httpd" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.607034 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" containerName="sg-core" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.611210 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.616666 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.618708 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.625069 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.651969 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.725883 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.805219 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-run-httpd\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.805277 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h444n\" (UniqueName: \"kubernetes.io/projected/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-kube-api-access-h444n\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.805610 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.805726 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-config-data\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.805807 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-log-httpd\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.805853 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-scripts\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.805881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 
10:26:14.909457 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-config-data\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.909638 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-log-httpd\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.909757 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-scripts\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.909809 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.909868 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-run-httpd\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.909905 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h444n\" (UniqueName: \"kubernetes.io/projected/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-kube-api-access-h444n\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.909942 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.910175 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-log-httpd\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.910295 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-run-httpd\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.930808 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.930992 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-config-data\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.931550 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-scripts\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.931655 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h444n\" (UniqueName: \"kubernetes.io/projected/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-kube-api-access-h444n\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.932315 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " pod="openstack/ceilometer-0" Feb 23 10:26:14 crc kubenswrapper[4904]: I0223 10:26:14.972988 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.272110 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f04219-ca0b-4cc3-86c5-67e0ceaf18ad" path="/var/lib/kubelet/pods/33f04219-ca0b-4cc3-86c5-67e0ceaf18ad/volumes" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.273216 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18bd80b-4e0b-4785-83c9-ede9b77d0a43" path="/var/lib/kubelet/pods/f18bd80b-4e0b-4785-83c9-ede9b77d0a43/volumes" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.391440 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56bb764cb4-74prt" event={"ID":"3a81e676-6e99-4cf5-88a8-78d0bb36f896","Type":"ContainerStarted","Data":"ed5d6c40d68d080db63822758b978fa3da26e59cffcee7ecdaa3ecc76884e26e"} Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.391911 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.391988 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.407528 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9930fc9-ddbc-4453-a790-4adba475fc22","Type":"ContainerStarted","Data":"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453"} Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.407693 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerName="cinder-api-log" containerID="cri-o://9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d" gracePeriod=30 Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.407908 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.407951 4904 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerName="cinder-api" containerID="cri-o://ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453" gracePeriod=30 Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.431327 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-56bb764cb4-74prt" podStartSLOduration=4.4313070230000005 podStartE2EDuration="4.431307023s" podCreationTimestamp="2026-02-23 10:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:15.413581589 +0000 UTC m=+1208.833955102" watchObservedRunningTime="2026-02-23 10:26:15.431307023 +0000 UTC m=+1208.851680526" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.440664 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c29372e9-1b19-4d4d-af56-67620b1ae385","Type":"ContainerStarted","Data":"81facbf2c22bf2039132b112d28a19dd8a651fda4a9540b6c3386cb5dc0e772e"} Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.471510 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.598799821 podStartE2EDuration="8.471490375s" podCreationTimestamp="2026-02-23 10:26:07 +0000 UTC" firstStartedPulling="2026-02-23 10:26:10.479406406 +0000 UTC m=+1203.899779929" lastFinishedPulling="2026-02-23 10:26:12.35209698 +0000 UTC m=+1205.772470483" observedRunningTime="2026-02-23 10:26:15.467179473 +0000 UTC m=+1208.887552986" watchObservedRunningTime="2026-02-23 10:26:15.471490375 +0000 UTC m=+1208.891863888" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.482123 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.482107227 podStartE2EDuration="8.482107227s" podCreationTimestamp="2026-02-23 10:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:15.444988342 +0000 UTC m=+1208.865361875" watchObservedRunningTime="2026-02-23 10:26:15.482107227 +0000 UTC m=+1208.902480740" Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.527221 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:26:15 crc kubenswrapper[4904]: I0223 10:26:15.534419 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.166160 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.341843 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9930fc9-ddbc-4453-a790-4adba475fc22-etc-machine-id\") pod \"c9930fc9-ddbc-4453-a790-4adba475fc22\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.341944 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9930fc9-ddbc-4453-a790-4adba475fc22-logs\") pod \"c9930fc9-ddbc-4453-a790-4adba475fc22\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.342167 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data\") pod \"c9930fc9-ddbc-4453-a790-4adba475fc22\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.342190 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-scripts\") pod \"c9930fc9-ddbc-4453-a790-4adba475fc22\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.342216 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-combined-ca-bundle\") pod \"c9930fc9-ddbc-4453-a790-4adba475fc22\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.342245 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data-custom\") pod \"c9930fc9-ddbc-4453-a790-4adba475fc22\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.342325 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hbwj\" (UniqueName: \"kubernetes.io/projected/c9930fc9-ddbc-4453-a790-4adba475fc22-kube-api-access-6hbwj\") pod \"c9930fc9-ddbc-4453-a790-4adba475fc22\" (UID: \"c9930fc9-ddbc-4453-a790-4adba475fc22\") " Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.342448 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9930fc9-ddbc-4453-a790-4adba475fc22-logs" (OuterVolumeSpecName: "logs") pod "c9930fc9-ddbc-4453-a790-4adba475fc22" (UID: "c9930fc9-ddbc-4453-a790-4adba475fc22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.342822 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9930fc9-ddbc-4453-a790-4adba475fc22-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.343117 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9930fc9-ddbc-4453-a790-4adba475fc22-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c9930fc9-ddbc-4453-a790-4adba475fc22" (UID: "c9930fc9-ddbc-4453-a790-4adba475fc22"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.346868 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-scripts" (OuterVolumeSpecName: "scripts") pod "c9930fc9-ddbc-4453-a790-4adba475fc22" (UID: "c9930fc9-ddbc-4453-a790-4adba475fc22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.349810 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c9930fc9-ddbc-4453-a790-4adba475fc22" (UID: "c9930fc9-ddbc-4453-a790-4adba475fc22"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.352336 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9930fc9-ddbc-4453-a790-4adba475fc22-kube-api-access-6hbwj" (OuterVolumeSpecName: "kube-api-access-6hbwj") pod "c9930fc9-ddbc-4453-a790-4adba475fc22" (UID: "c9930fc9-ddbc-4453-a790-4adba475fc22"). InnerVolumeSpecName "kube-api-access-6hbwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.378271 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9930fc9-ddbc-4453-a790-4adba475fc22" (UID: "c9930fc9-ddbc-4453-a790-4adba475fc22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.419525 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data" (OuterVolumeSpecName: "config-data") pod "c9930fc9-ddbc-4453-a790-4adba475fc22" (UID: "c9930fc9-ddbc-4453-a790-4adba475fc22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.444839 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.444879 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.444895 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.444909 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9930fc9-ddbc-4453-a790-4adba475fc22-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.444922 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hbwj\" (UniqueName: \"kubernetes.io/projected/c9930fc9-ddbc-4453-a790-4adba475fc22-kube-api-access-6hbwj\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.444933 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9930fc9-ddbc-4453-a790-4adba475fc22-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.455427 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerStarted","Data":"77881d5a564f0133b20822c5bc81ab16294b70510ae79e7fb54b4c6d0a15be4e"} Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.455485 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerStarted","Data":"5c1d1726c3e67ce35471a2ad92457ca176b9ea40765bd8096ee5d0983279f033"} Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.458056 4904 generic.go:334] "Generic (PLEG): container finished" podID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerID="ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453" exitCode=0 Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.458094 4904 generic.go:334] "Generic (PLEG): container finished" podID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerID="9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d" exitCode=143 Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.458113 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9930fc9-ddbc-4453-a790-4adba475fc22","Type":"ContainerDied","Data":"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453"} Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.458184 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9930fc9-ddbc-4453-a790-4adba475fc22","Type":"ContainerDied","Data":"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d"} Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.458200 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c9930fc9-ddbc-4453-a790-4adba475fc22","Type":"ContainerDied","Data":"21b429d5c9769fdd16ae07dd78e105006321d8988082ab762703ff06b177c0d0"} Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.458224 4904 scope.go:117] "RemoveContainer" containerID="ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.460322 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.486485 4904 scope.go:117] "RemoveContainer" containerID="9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.536231 4904 scope.go:117] "RemoveContainer" containerID="ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453" Feb 23 10:26:16 crc kubenswrapper[4904]: E0223 10:26:16.537140 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453\": container with ID starting with ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453 not found: ID does not exist" containerID="ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.537182 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453"} err="failed to get container status \"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453\": rpc error: code = NotFound desc = could not find container \"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453\": container with ID starting with ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453 not found: ID does not exist" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.537212 4904 scope.go:117] "RemoveContainer" containerID="9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d" Feb 23 10:26:16 crc kubenswrapper[4904]: E0223 10:26:16.537579 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d\": container with ID starting with 9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d not found: ID does not exist" containerID="9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.537611 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d"} err="failed to get container status \"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d\": rpc error: code = NotFound desc = could not find container \"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d\": container with ID starting with 9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d not found: ID does not exist" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.537630 4904 scope.go:117] "RemoveContainer" containerID="ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.537968 4904 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453"} err="failed to get container status \"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453\": rpc error: code = NotFound desc = could not find container \"ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453\": container with ID starting with ec3d9e8380cb9222069e5c2d98859ac8d321965e960e62e7f776cc6a995fd453 not found: ID does not exist" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.537991 4904 scope.go:117] "RemoveContainer" containerID="9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.538246 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d"} err="failed to get container status \"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d\": rpc error: code = NotFound desc = could not find container \"9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d\": container with ID starting with 9991ec1fa9fda16a1e97b8d4cd3daef6ec62a844749692612863a9be887b2c7d not found: ID does not exist" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.543565 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.554762 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.576167 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 23 10:26:16 crc kubenswrapper[4904]: E0223 10:26:16.576981 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerName="cinder-api-log" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.577054 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerName="cinder-api-log" Feb 23 10:26:16 crc kubenswrapper[4904]: E0223 10:26:16.577116 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerName="cinder-api" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.577186 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerName="cinder-api" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.577434 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerName="cinder-api-log" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.577515 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" containerName="cinder-api" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.578777 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.581456 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.581691 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.583499 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.592766 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.752488 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe617375-c009-420b-bcad-a5a2a2bae412-logs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.752566 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-config-data\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.752633 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.752659 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.752687 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.752709 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.752785 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-scripts\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.753901 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/fe617375-c009-420b-bcad-a5a2a2bae412-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.754104 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77wj6\" (UniqueName: \"kubernetes.io/projected/fe617375-c009-420b-bcad-a5a2a2bae412-kube-api-access-77wj6\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856304 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-config-data\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856376 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856401 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856427 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856450 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856499 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-scripts\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856548 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe617375-c009-420b-bcad-a5a2a2bae412-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856580 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77wj6\" (UniqueName: \"kubernetes.io/projected/fe617375-c009-420b-bcad-a5a2a2bae412-kube-api-access-77wj6\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.856628 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe617375-c009-420b-bcad-a5a2a2bae412-logs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.857138 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe617375-c009-420b-bcad-a5a2a2bae412-logs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.857691 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe617375-c009-420b-bcad-a5a2a2bae412-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.866413 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.867501 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.867982 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.868176 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-scripts\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.870470 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.876044 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe617375-c009-420b-bcad-a5a2a2bae412-config-data\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.877192 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77wj6\" (UniqueName: \"kubernetes.io/projected/fe617375-c009-420b-bcad-a5a2a2bae412-kube-api-access-77wj6\") pod \"cinder-api-0\" (UID: \"fe617375-c009-420b-bcad-a5a2a2bae412\") " pod="openstack/cinder-api-0" Feb 23 10:26:16 crc kubenswrapper[4904]: I0223 10:26:16.913954 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.286564 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9930fc9-ddbc-4453-a790-4adba475fc22" path="/var/lib/kubelet/pods/c9930fc9-ddbc-4453-a790-4adba475fc22/volumes" Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.308843 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7cbb478958-6t4v7" Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.359699 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.393445 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56599cf886-x6z6x"] Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.447736 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 10:26:17 crc kubenswrapper[4904]: W0223 10:26:17.454051 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe617375_c009_420b_bcad_a5a2a2bae412.slice/crio-036d0408eb6ebb030e767a243e7c04862b3455867dec1e032041e05f5465f4ff WatchSource:0}: Error finding container 036d0408eb6ebb030e767a243e7c04862b3455867dec1e032041e05f5465f4ff: Status 404 returned error can't find the container with id 036d0408eb6ebb030e767a243e7c04862b3455867dec1e032041e05f5465f4ff Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.477002 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe617375-c009-420b-bcad-a5a2a2bae412","Type":"ContainerStarted","Data":"036d0408eb6ebb030e767a243e7c04862b3455867dec1e032041e05f5465f4ff"} Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.482190 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerStarted","Data":"650daa17446f499de6eb449baa1e9e6b69c71e97f0630f7012af3372f59974ae"} Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.482642 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56599cf886-x6z6x" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" containerID="cri-o://e9caa45b5573448893f7c5e50da11c9e9c9b9b0d37fab7d7d8fa006cfc467548" gracePeriod=30 Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.482889 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56599cf886-x6z6x" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon-log" containerID="cri-o://33d0abadfeed8f191cd4e6b8bd96fedb35d9389fe8aa1a5686f8cd4c2edf0bf6" gracePeriod=30 Feb 23 10:26:17 crc kubenswrapper[4904]: I0223 10:26:17.745736 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 23 10:26:18 crc kubenswrapper[4904]: I0223 10:26:18.493866 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe617375-c009-420b-bcad-a5a2a2bae412","Type":"ContainerStarted","Data":"62a1d6edaf175e73f30d117d1ae359c69cfee48370ae7c04b6b658f8d381f3e8"} Feb 23 10:26:18 crc kubenswrapper[4904]: I0223 10:26:18.496839 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerStarted","Data":"26f7ddcb940fe1851947e658fe7181b5152480c3a13e982c40afad73cbdb5fbf"} Feb 23 10:26:19 crc kubenswrapper[4904]: I0223 10:26:19.510229 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe617375-c009-420b-bcad-a5a2a2bae412","Type":"ContainerStarted","Data":"631d3a1dfb245a18a336ca3c5dcacff090ed392fd352a268b5f6d5c3abaf73ac"} Feb 23 10:26:19 crc kubenswrapper[4904]: I0223 10:26:19.510613 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 23 10:26:19 crc kubenswrapper[4904]: I0223 10:26:19.535792 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.535768216 podStartE2EDuration="3.535768216s" podCreationTimestamp="2026-02-23 10:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:19.526887974 +0000 UTC m=+1212.947261487" watchObservedRunningTime="2026-02-23 10:26:19.535768216 +0000 UTC m=+1212.956141729" Feb 23 10:26:19 crc kubenswrapper[4904]: I0223 10:26:19.685759 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.003582 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f6485bd78-lkn6x" Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.095045 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8f775cf4-7m6bg"] Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.095333 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8f775cf4-7m6bg" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api-log" containerID="cri-o://dd2ae5adec58dc1fb365b666bd1b6c84528f2f87dbd738002fea908b48b866c0" gracePeriod=30 Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.095440 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8f775cf4-7m6bg" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api" containerID="cri-o://249f89cd6933b705b98cbd7b967fd3b62fead1f4c396e98e0b4e6d5c38cee342" gracePeriod=30 Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.103528 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8f775cf4-7m6bg" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.106204 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8f775cf4-7m6bg" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.106642 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8f775cf4-7m6bg" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.530734 4904 generic.go:334] "Generic (PLEG): container finished" podID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" 
containerID="dd2ae5adec58dc1fb365b666bd1b6c84528f2f87dbd738002fea908b48b866c0" exitCode=143 Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.530788 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f775cf4-7m6bg" event={"ID":"1d0ec2d6-470d-4122-b5ce-fb9871e5774a","Type":"ContainerDied","Data":"dd2ae5adec58dc1fb365b666bd1b6c84528f2f87dbd738002fea908b48b866c0"} Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.533656 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerStarted","Data":"b180e79814c3477943d39a7a5afdb94e895321c51d4a6e22f2734590303e0c20"} Feb 23 10:26:20 crc kubenswrapper[4904]: I0223 10:26:20.555676 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.170245854 podStartE2EDuration="6.555653491s" podCreationTimestamp="2026-02-23 10:26:14 +0000 UTC" firstStartedPulling="2026-02-23 10:26:15.526960872 +0000 UTC m=+1208.947334395" lastFinishedPulling="2026-02-23 10:26:19.912368519 +0000 UTC m=+1213.332742032" observedRunningTime="2026-02-23 10:26:20.553197101 +0000 UTC m=+1213.973570634" watchObservedRunningTime="2026-02-23 10:26:20.555653491 +0000 UTC m=+1213.976027014" Feb 23 10:26:21 crc kubenswrapper[4904]: I0223 10:26:21.546420 4904 generic.go:334] "Generic (PLEG): container finished" podID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerID="e9caa45b5573448893f7c5e50da11c9e9c9b9b0d37fab7d7d8fa006cfc467548" exitCode=0 Feb 23 10:26:21 crc kubenswrapper[4904]: I0223 10:26:21.546632 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56599cf886-x6z6x" event={"ID":"16c88a53-6a67-457c-9cce-5fd72203ca30","Type":"ContainerDied","Data":"e9caa45b5573448893f7c5e50da11c9e9c9b9b0d37fab7d7d8fa006cfc467548"} Feb 23 10:26:21 crc kubenswrapper[4904]: I0223 10:26:21.547780 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 10:26:21 crc kubenswrapper[4904]: I0223 10:26:21.702783 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-56599cf886-x6z6x" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 23 10:26:22 crc kubenswrapper[4904]: I0223 10:26:22.721039 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:26:22 crc kubenswrapper[4904]: I0223 10:26:22.820782 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gp5rq"] Feb 23 10:26:22 crc kubenswrapper[4904]: I0223 10:26:22.829216 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" podUID="45fe4c1b-753c-434c-a67a-0022f6109980" containerName="dnsmasq-dns" containerID="cri-o://833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298" gracePeriod=10 Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.290070 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.393147 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.474945 4904 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.548689 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-swift-storage-0\") pod \"45fe4c1b-753c-434c-a67a-0022f6109980\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.549114 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmdjq\" (UniqueName: \"kubernetes.io/projected/45fe4c1b-753c-434c-a67a-0022f6109980-kube-api-access-cmdjq\") pod \"45fe4c1b-753c-434c-a67a-0022f6109980\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.549293 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-sb\") pod \"45fe4c1b-753c-434c-a67a-0022f6109980\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.549328 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-svc\") pod \"45fe4c1b-753c-434c-a67a-0022f6109980\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.549400 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-config\") pod \"45fe4c1b-753c-434c-a67a-0022f6109980\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.549490 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-nb\") pod \"45fe4c1b-753c-434c-a67a-0022f6109980\" (UID: \"45fe4c1b-753c-434c-a67a-0022f6109980\") " Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.565022 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fe4c1b-753c-434c-a67a-0022f6109980-kube-api-access-cmdjq" (OuterVolumeSpecName: "kube-api-access-cmdjq") pod "45fe4c1b-753c-434c-a67a-0022f6109980" (UID: "45fe4c1b-753c-434c-a67a-0022f6109980"). InnerVolumeSpecName "kube-api-access-cmdjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.601671 4904 generic.go:334] "Generic (PLEG): container finished" podID="45fe4c1b-753c-434c-a67a-0022f6109980" containerID="833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298" exitCode=0 Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.601910 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.601938 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerName="cinder-scheduler" containerID="cri-o://40548c3bf23bcc055376a3737e8aeadd7aca30192330cfdafcb17930110e5c22" gracePeriod=30 Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.602016 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" event={"ID":"45fe4c1b-753c-434c-a67a-0022f6109980","Type":"ContainerDied","Data":"833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298"} Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.602049 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" event={"ID":"45fe4c1b-753c-434c-a67a-0022f6109980","Type":"ContainerDied","Data":"28ed4a64f0c89e026faa2b614f18a975b88a29750ae5bdce3c556ea61aa43c27"} Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.602068 4904 scope.go:117] "RemoveContainer" containerID="833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.602617 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerName="probe" containerID="cri-o://81facbf2c22bf2039132b112d28a19dd8a651fda4a9540b6c3386cb5dc0e772e" gracePeriod=30 Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.646689 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-config" (OuterVolumeSpecName: "config") pod "45fe4c1b-753c-434c-a67a-0022f6109980" (UID: "45fe4c1b-753c-434c-a67a-0022f6109980"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.653446 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmdjq\" (UniqueName: \"kubernetes.io/projected/45fe4c1b-753c-434c-a67a-0022f6109980-kube-api-access-cmdjq\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.653486 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.699576 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45fe4c1b-753c-434c-a67a-0022f6109980" (UID: "45fe4c1b-753c-434c-a67a-0022f6109980"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.700448 4904 scope.go:117] "RemoveContainer" containerID="dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.702012 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45fe4c1b-753c-434c-a67a-0022f6109980" (UID: "45fe4c1b-753c-434c-a67a-0022f6109980"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.708909 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45fe4c1b-753c-434c-a67a-0022f6109980" (UID: "45fe4c1b-753c-434c-a67a-0022f6109980"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.712233 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "45fe4c1b-753c-434c-a67a-0022f6109980" (UID: "45fe4c1b-753c-434c-a67a-0022f6109980"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.726346 4904 scope.go:117] "RemoveContainer" containerID="833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298" Feb 23 10:26:23 crc kubenswrapper[4904]: E0223 10:26:23.727823 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298\": container with ID starting with 833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298 not found: ID does not exist" containerID="833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.727855 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298"} err="failed to get container status \"833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298\": rpc error: code = NotFound desc = could not find container \"833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298\": container with ID starting with 833e7b57035840f2075b0179b343bcb6037e6b37ca36ee4254140c0297eb3298 not found: ID does not exist" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.727881 4904 scope.go:117] "RemoveContainer" containerID="dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8" Feb 23 10:26:23 crc kubenswrapper[4904]: E0223 10:26:23.728685 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8\": container with ID starting with dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8 not found: ID does not exist" containerID="dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.728709 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8"} err="failed to get container status \"dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8\": rpc error: code = NotFound desc = could not find container \"dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8\": container with ID starting with dc3c8f98978ecd6b2b362e11c3bf5c774c3ebeb9d349fb648509decdb4787df8 not found: ID does not exist" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.755736 4904 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.755796 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.755810 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.755827 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45fe4c1b-753c-434c-a67a-0022f6109980-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.845470 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-84d4456f94-cxsx9" Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.942981 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gp5rq"] Feb 23 10:26:23 crc kubenswrapper[4904]: I0223 10:26:23.969778 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gp5rq"] Feb 23 10:26:24 crc kubenswrapper[4904]: I0223 10:26:24.620732 4904 generic.go:334] "Generic (PLEG): container finished" podID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerID="81facbf2c22bf2039132b112d28a19dd8a651fda4a9540b6c3386cb5dc0e772e" exitCode=0 Feb 23 10:26:24 crc kubenswrapper[4904]: I0223 10:26:24.620900 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c29372e9-1b19-4d4d-af56-67620b1ae385","Type":"ContainerDied","Data":"81facbf2c22bf2039132b112d28a19dd8a651fda4a9540b6c3386cb5dc0e772e"} Feb 23 10:26:25 crc kubenswrapper[4904]: I0223 10:26:25.147936 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8f775cf4-7m6bg" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 10:26:25 crc kubenswrapper[4904]: I0223 10:26:25.308989 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45fe4c1b-753c-434c-a67a-0022f6109980" path="/var/lib/kubelet/pods/45fe4c1b-753c-434c-a67a-0022f6109980/volumes" Feb 23 10:26:25 crc kubenswrapper[4904]: I0223 10:26:25.527558 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8f775cf4-7m6bg" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:34882->10.217.0.179:9311: read: connection reset by peer" Feb 23 10:26:25 crc kubenswrapper[4904]: I0223 10:26:25.528049 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8f775cf4-7m6bg" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:34892->10.217.0.179:9311: read: connection reset by peer" Feb 23 10:26:25 crc kubenswrapper[4904]: I0223 10:26:25.649348 4904 generic.go:334] "Generic (PLEG): container 
finished" podID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerID="40548c3bf23bcc055376a3737e8aeadd7aca30192330cfdafcb17930110e5c22" exitCode=0 Feb 23 10:26:25 crc kubenswrapper[4904]: I0223 10:26:25.649446 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c29372e9-1b19-4d4d-af56-67620b1ae385","Type":"ContainerDied","Data":"40548c3bf23bcc055376a3737e8aeadd7aca30192330cfdafcb17930110e5c22"} Feb 23 10:26:25 crc kubenswrapper[4904]: I0223 10:26:25.654213 4904 generic.go:334] "Generic (PLEG): container finished" podID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerID="249f89cd6933b705b98cbd7b967fd3b62fead1f4c396e98e0b4e6d5c38cee342" exitCode=0 Feb 23 10:26:25 crc kubenswrapper[4904]: I0223 10:26:25.654258 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f775cf4-7m6bg" event={"ID":"1d0ec2d6-470d-4122-b5ce-fb9871e5774a","Type":"ContainerDied","Data":"249f89cd6933b705b98cbd7b967fd3b62fead1f4c396e98e0b4e6d5c38cee342"} Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.120314 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.220600 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.232784 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-scripts\") pod \"c29372e9-1b19-4d4d-af56-67620b1ae385\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.232941 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data\") pod \"c29372e9-1b19-4d4d-af56-67620b1ae385\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.233395 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-combined-ca-bundle\") pod \"c29372e9-1b19-4d4d-af56-67620b1ae385\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.233434 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data-custom\") pod \"c29372e9-1b19-4d4d-af56-67620b1ae385\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.233465 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbhn8\" (UniqueName: \"kubernetes.io/projected/c29372e9-1b19-4d4d-af56-67620b1ae385-kube-api-access-gbhn8\") pod \"c29372e9-1b19-4d4d-af56-67620b1ae385\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.233508 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29372e9-1b19-4d4d-af56-67620b1ae385-etc-machine-id\") pod \"c29372e9-1b19-4d4d-af56-67620b1ae385\" (UID: \"c29372e9-1b19-4d4d-af56-67620b1ae385\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 
10:26:26.233894 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29372e9-1b19-4d4d-af56-67620b1ae385-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c29372e9-1b19-4d4d-af56-67620b1ae385" (UID: "c29372e9-1b19-4d4d-af56-67620b1ae385"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.234282 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29372e9-1b19-4d4d-af56-67620b1ae385-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.242161 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-scripts" (OuterVolumeSpecName: "scripts") pod "c29372e9-1b19-4d4d-af56-67620b1ae385" (UID: "c29372e9-1b19-4d4d-af56-67620b1ae385"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.242221 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c29372e9-1b19-4d4d-af56-67620b1ae385" (UID: "c29372e9-1b19-4d4d-af56-67620b1ae385"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.246020 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29372e9-1b19-4d4d-af56-67620b1ae385-kube-api-access-gbhn8" (OuterVolumeSpecName: "kube-api-access-gbhn8") pod "c29372e9-1b19-4d4d-af56-67620b1ae385" (UID: "c29372e9-1b19-4d4d-af56-67620b1ae385"). InnerVolumeSpecName "kube-api-access-gbhn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.323078 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c29372e9-1b19-4d4d-af56-67620b1ae385" (UID: "c29372e9-1b19-4d4d-af56-67620b1ae385"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.335756 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data\") pod \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.335853 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-combined-ca-bundle\") pod \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.336002 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data-custom\") pod \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.336052 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-logs\") pod \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.336151 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4bqv\" (UniqueName: \"kubernetes.io/projected/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-kube-api-access-k4bqv\") pod \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\" (UID: \"1d0ec2d6-470d-4122-b5ce-fb9871e5774a\") " Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.336777 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-logs" (OuterVolumeSpecName: "logs") pod "1d0ec2d6-470d-4122-b5ce-fb9871e5774a" (UID: "1d0ec2d6-470d-4122-b5ce-fb9871e5774a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.337353 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.337385 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.337406 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.337420 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbhn8\" (UniqueName: \"kubernetes.io/projected/c29372e9-1b19-4d4d-af56-67620b1ae385-kube-api-access-gbhn8\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.337433 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.339837 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d0ec2d6-470d-4122-b5ce-fb9871e5774a" (UID: "1d0ec2d6-470d-4122-b5ce-fb9871e5774a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.342538 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-kube-api-access-k4bqv" (OuterVolumeSpecName: "kube-api-access-k4bqv") pod "1d0ec2d6-470d-4122-b5ce-fb9871e5774a" (UID: "1d0ec2d6-470d-4122-b5ce-fb9871e5774a"). InnerVolumeSpecName "kube-api-access-k4bqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.366261 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d0ec2d6-470d-4122-b5ce-fb9871e5774a" (UID: "1d0ec2d6-470d-4122-b5ce-fb9871e5774a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.386938 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data" (OuterVolumeSpecName: "config-data") pod "c29372e9-1b19-4d4d-af56-67620b1ae385" (UID: "c29372e9-1b19-4d4d-af56-67620b1ae385"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.409441 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data" (OuterVolumeSpecName: "config-data") pod "1d0ec2d6-470d-4122-b5ce-fb9871e5774a" (UID: "1d0ec2d6-470d-4122-b5ce-fb9871e5774a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.443879 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4bqv\" (UniqueName: \"kubernetes.io/projected/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-kube-api-access-k4bqv\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.443931 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29372e9-1b19-4d4d-af56-67620b1ae385-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.443944 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.443955 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.443965 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d0ec2d6-470d-4122-b5ce-fb9871e5774a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.677194 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c29372e9-1b19-4d4d-af56-67620b1ae385","Type":"ContainerDied","Data":"5896b1e5ffc2e0dcff4e99062d79c088954715cff94aa96ade6e36813d5d28ea"} Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.677267 4904 scope.go:117] "RemoveContainer" containerID="81facbf2c22bf2039132b112d28a19dd8a651fda4a9540b6c3386cb5dc0e772e" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.677354 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.683564 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f775cf4-7m6bg" event={"ID":"1d0ec2d6-470d-4122-b5ce-fb9871e5774a","Type":"ContainerDied","Data":"bb89a25a0b73a9846874e993a2566d79c87ad93f5d02ba0ba16513e5de5979f7"} Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.683684 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8f775cf4-7m6bg" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.767620 4904 scope.go:117] "RemoveContainer" containerID="40548c3bf23bcc055376a3737e8aeadd7aca30192330cfdafcb17930110e5c22" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.799918 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8f775cf4-7m6bg"] Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.820791 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8f775cf4-7m6bg"] Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.824391 4904 scope.go:117] "RemoveContainer" containerID="249f89cd6933b705b98cbd7b967fd3b62fead1f4c396e98e0b4e6d5c38cee342" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.841975 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.856424 4904 scope.go:117] "RemoveContainer" containerID="dd2ae5adec58dc1fb365b666bd1b6c84528f2f87dbd738002fea908b48b866c0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.866405 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.878939 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:26 crc kubenswrapper[4904]: E0223 10:26:26.879964 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerName="probe" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.879988 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerName="probe" Feb 23 10:26:26 crc kubenswrapper[4904]: E0223 10:26:26.880005 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880013 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api" Feb 23 10:26:26 crc kubenswrapper[4904]: E0223 10:26:26.880029 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fe4c1b-753c-434c-a67a-0022f6109980" containerName="init" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880041 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fe4c1b-753c-434c-a67a-0022f6109980" containerName="init" Feb 23 10:26:26 crc kubenswrapper[4904]: E0223 10:26:26.880069 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerName="cinder-scheduler" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880078 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerName="cinder-scheduler" Feb 23 10:26:26 crc kubenswrapper[4904]: E0223 10:26:26.880117 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fe4c1b-753c-434c-a67a-0022f6109980" containerName="dnsmasq-dns" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880127 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fe4c1b-753c-434c-a67a-0022f6109980" containerName="dnsmasq-dns" Feb 23 10:26:26 crc kubenswrapper[4904]: E0223 10:26:26.880138 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api-log" Feb 23 10:26:26 crc 
kubenswrapper[4904]: I0223 10:26:26.880146 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api-log" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880455 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fe4c1b-753c-434c-a67a-0022f6109980" containerName="dnsmasq-dns" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880595 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880617 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" containerName="barbican-api-log" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880629 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerName="probe" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.880642 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" containerName="cinder-scheduler" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.882457 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.887921 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.888172 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.968099 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf8nc\" (UniqueName: \"kubernetes.io/projected/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-kube-api-access-zf8nc\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.968151 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.968177 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-scripts\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.968588 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-config-data\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.968824 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:26 crc kubenswrapper[4904]: I0223 10:26:26.968891 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.071626 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf8nc\" (UniqueName: \"kubernetes.io/projected/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-kube-api-access-zf8nc\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.071682 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.071710 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-scripts\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.071804 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-config-data\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.071852 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.071874 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.071980 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.076739 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.077745 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.092174 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-config-data\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.096098 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-scripts\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.102782 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf8nc\" (UniqueName: \"kubernetes.io/projected/f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1-kube-api-access-zf8nc\") pod \"cinder-scheduler-0\" (UID: \"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1\") " pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.203614 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.357672 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d0ec2d6-470d-4122-b5ce-fb9871e5774a" path="/var/lib/kubelet/pods/1d0ec2d6-470d-4122-b5ce-fb9871e5774a/volumes" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.362067 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29372e9-1b19-4d4d-af56-67620b1ae385" path="/var/lib/kubelet/pods/c29372e9-1b19-4d4d-af56-67620b1ae385/volumes" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.548712 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.550776 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.563521 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-7r2kp" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.563638 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.563909 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.589330 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.667726 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q2d7\" (UniqueName: \"kubernetes.io/projected/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-kube-api-access-8q2d7\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.667823 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-openstack-config-secret\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.667855 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-openstack-config\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.668007 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.711155 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.769468 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q2d7\" (UniqueName: \"kubernetes.io/projected/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-kube-api-access-8q2d7\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.769854 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-openstack-config-secret\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.769886 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-openstack-config\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " 
pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.769943 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.776139 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.779107 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-openstack-config-secret\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.785099 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-openstack-config\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.794377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q2d7\" (UniqueName: \"kubernetes.io/projected/f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8-kube-api-access-8q2d7\") pod \"openstackclient\" (UID: \"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8\") " pod="openstack/openstackclient" Feb 23 10:26:27 crc kubenswrapper[4904]: I0223 10:26:27.889225 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 10:26:28 crc kubenswrapper[4904]: I0223 10:26:28.331308 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-gp5rq" podUID="45fe4c1b-753c-434c-a67a-0022f6109980" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: i/o timeout" Feb 23 10:26:28 crc kubenswrapper[4904]: I0223 10:26:28.559459 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 10:26:28 crc kubenswrapper[4904]: I0223 10:26:28.760526 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1","Type":"ContainerStarted","Data":"0cefbd1bb42fabddfac7e4d8e1fb3cf622886b864a6e079cf669cb921ae613b4"} Feb 23 10:26:28 crc kubenswrapper[4904]: I0223 10:26:28.760794 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1","Type":"ContainerStarted","Data":"948191e6a99d70f0b6668b4cc40be923a6921e2257f280e29e13d0e9c66292f4"} Feb 23 10:26:28 crc kubenswrapper[4904]: I0223 10:26:28.777910 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8","Type":"ContainerStarted","Data":"7cf12bf452fd7fc610cf5ec5e86b7dc02a66324bdc30dce98a587e4c127289ff"} Feb 23 10:26:29 crc kubenswrapper[4904]: I0223 10:26:29.183663 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 23 10:26:29 crc kubenswrapper[4904]: I0223 10:26:29.792773 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1","Type":"ContainerStarted","Data":"95451ee0df8d42b2b0ae8a1e9b395bd0d108d70612af8729924475d9365f5796"} Feb 23 10:26:29 crc kubenswrapper[4904]: I0223 10:26:29.822088 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.822065502 podStartE2EDuration="3.822065502s" podCreationTimestamp="2026-02-23 10:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:29.809797023 +0000 UTC m=+1223.230170536" watchObservedRunningTime="2026-02-23 10:26:29.822065502 +0000 UTC m=+1223.242439015" Feb 23 10:26:31 crc kubenswrapper[4904]: I0223 10:26:31.701098 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-56599cf886-x6z6x" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 23 10:26:32 crc kubenswrapper[4904]: I0223 10:26:32.204402 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 23 10:26:32 crc kubenswrapper[4904]: I0223 10:26:32.917845 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-d55dc77cc-gg7pn"] Feb 23 10:26:32 crc kubenswrapper[4904]: I0223 10:26:32.919918 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:32 crc kubenswrapper[4904]: I0223 10:26:32.943186 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 23 10:26:32 crc kubenswrapper[4904]: I0223 10:26:32.943212 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 10:26:32 crc kubenswrapper[4904]: I0223 10:26:32.943186 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 23 10:26:32 crc kubenswrapper[4904]: I0223 10:26:32.951104 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d55dc77cc-gg7pn"] Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.023145 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-combined-ca-bundle\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.023268 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-etc-swift\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.023342 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-config-data\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.023378 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-internal-tls-certs\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.023403 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbwb\" (UniqueName: \"kubernetes.io/projected/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-kube-api-access-4gbwb\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.023488 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-public-tls-certs\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.023565 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-run-httpd\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " 
pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.023625 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-log-httpd\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.126206 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-combined-ca-bundle\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.126300 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-etc-swift\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.126367 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-config-data\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.126385 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-internal-tls-certs\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.127350 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gbwb\" (UniqueName: \"kubernetes.io/projected/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-kube-api-access-4gbwb\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.127435 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-public-tls-certs\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.127511 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-run-httpd\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.127577 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-log-httpd\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc 
kubenswrapper[4904]: I0223 10:26:33.128140 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-log-httpd\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.129592 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-run-httpd\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.136759 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-internal-tls-certs\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.137123 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-config-data\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.137613 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-public-tls-certs\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.139685 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-etc-swift\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.141444 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-combined-ca-bundle\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.149628 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gbwb\" (UniqueName: \"kubernetes.io/projected/7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f-kube-api-access-4gbwb\") pod \"swift-proxy-d55dc77cc-gg7pn\" (UID: \"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f\") " pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.238699 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:33 crc kubenswrapper[4904]: I0223 10:26:33.945847 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d55dc77cc-gg7pn"] Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.276321 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.276645 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="ceilometer-central-agent" containerID="cri-o://77881d5a564f0133b20822c5bc81ab16294b70510ae79e7fb54b4c6d0a15be4e" gracePeriod=30 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.276760 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="sg-core" containerID="cri-o://26f7ddcb940fe1851947e658fe7181b5152480c3a13e982c40afad73cbdb5fbf" gracePeriod=30 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.276836 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="proxy-httpd" containerID="cri-o://b180e79814c3477943d39a7a5afdb94e895321c51d4a6e22f2734590303e0c20" gracePeriod=30 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.276843 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="ceilometer-notification-agent" containerID="cri-o://650daa17446f499de6eb449baa1e9e6b69c71e97f0630f7012af3372f59974ae" gracePeriod=30 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.298095 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.186:3000/\": EOF" Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.398315 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c456b7c45-bb96t" Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.494269 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8656f88b4-fzcg6"] Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.494794 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8656f88b4-fzcg6" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerName="neutron-api" containerID="cri-o://c21d2acd45dd7a1af73b2aa905dfb559e04cd7f63220ef45099a657fadfc5fb8" gracePeriod=30 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.494950 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8656f88b4-fzcg6" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerName="neutron-httpd" containerID="cri-o://6661758c00cc5a88a75673faa655d47686b45d837e898dc85caf3fb0bce6dd4a" gracePeriod=30 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.876067 4904 generic.go:334] "Generic (PLEG): container finished" podID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerID="b180e79814c3477943d39a7a5afdb94e895321c51d4a6e22f2734590303e0c20" exitCode=0 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.876109 4904 generic.go:334] "Generic (PLEG): container finished" podID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" 
containerID="26f7ddcb940fe1851947e658fe7181b5152480c3a13e982c40afad73cbdb5fbf" exitCode=2 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.876121 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerDied","Data":"b180e79814c3477943d39a7a5afdb94e895321c51d4a6e22f2734590303e0c20"} Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.876187 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerDied","Data":"26f7ddcb940fe1851947e658fe7181b5152480c3a13e982c40afad73cbdb5fbf"} Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.996778 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.997160 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-log" containerID="cri-o://25dd08c1c8d066cb437b1d6dbd7839292b3a914bd9976f76934fefff8f9410c0" gracePeriod=30 Feb 23 10:26:34 crc kubenswrapper[4904]: I0223 10:26:34.997439 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-httpd" containerID="cri-o://7b9cb4add431cbc249a603e2bfa5389a0f4ba87ef27fb2bd3c811455cca5baf1" gracePeriod=30 Feb 23 10:26:35 crc kubenswrapper[4904]: I0223 10:26:35.892184 4904 generic.go:334] "Generic (PLEG): container finished" podID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerID="25dd08c1c8d066cb437b1d6dbd7839292b3a914bd9976f76934fefff8f9410c0" exitCode=143 Feb 23 10:26:35 crc kubenswrapper[4904]: I0223 10:26:35.892285 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0819d4e1-7204-41b9-80c0-1b8e86fb211d","Type":"ContainerDied","Data":"25dd08c1c8d066cb437b1d6dbd7839292b3a914bd9976f76934fefff8f9410c0"} Feb 23 10:26:35 crc kubenswrapper[4904]: I0223 10:26:35.897311 4904 generic.go:334] "Generic (PLEG): container finished" podID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerID="77881d5a564f0133b20822c5bc81ab16294b70510ae79e7fb54b4c6d0a15be4e" exitCode=0 Feb 23 10:26:35 crc kubenswrapper[4904]: I0223 10:26:35.897375 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerDied","Data":"77881d5a564f0133b20822c5bc81ab16294b70510ae79e7fb54b4c6d0a15be4e"} Feb 23 10:26:35 crc kubenswrapper[4904]: I0223 10:26:35.901689 4904 generic.go:334] "Generic (PLEG): container finished" podID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerID="6661758c00cc5a88a75673faa655d47686b45d837e898dc85caf3fb0bce6dd4a" exitCode=0 Feb 23 10:26:35 crc kubenswrapper[4904]: I0223 10:26:35.901769 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656f88b4-fzcg6" event={"ID":"e3b57aa5-707b-41f7-af28-34a33cb8e84e","Type":"ContainerDied","Data":"6661758c00cc5a88a75673faa655d47686b45d837e898dc85caf3fb0bce6dd4a"} Feb 23 10:26:37 crc kubenswrapper[4904]: I0223 10:26:37.452476 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 23 10:26:38 crc kubenswrapper[4904]: I0223 10:26:38.539921 4904 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/glance-default-internal-api-0" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": dial tcp 10.217.0.169:9292: connect: connection refused" Feb 23 10:26:38 crc kubenswrapper[4904]: I0223 10:26:38.539925 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": dial tcp 10.217.0.169:9292: connect: connection refused" Feb 23 10:26:38 crc kubenswrapper[4904]: I0223 10:26:38.952464 4904 generic.go:334] "Generic (PLEG): container finished" podID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerID="7b9cb4add431cbc249a603e2bfa5389a0f4ba87ef27fb2bd3c811455cca5baf1" exitCode=0 Feb 23 10:26:38 crc kubenswrapper[4904]: I0223 10:26:38.952519 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0819d4e1-7204-41b9-80c0-1b8e86fb211d","Type":"ContainerDied","Data":"7b9cb4add431cbc249a603e2bfa5389a0f4ba87ef27fb2bd3c811455cca5baf1"} Feb 23 10:26:39 crc kubenswrapper[4904]: I0223 10:26:39.857859 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:26:39 crc kubenswrapper[4904]: I0223 10:26:39.858184 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" containerName="glance-log" containerID="cri-o://200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23" gracePeriod=30 Feb 23 10:26:39 crc kubenswrapper[4904]: I0223 10:26:39.858221 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" containerName="glance-httpd" containerID="cri-o://2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a" gracePeriod=30 Feb 23 10:26:39 crc kubenswrapper[4904]: I0223 10:26:39.966662 4904 generic.go:334] "Generic (PLEG): container finished" podID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerID="650daa17446f499de6eb449baa1e9e6b69c71e97f0630f7012af3372f59974ae" exitCode=0 Feb 23 10:26:39 crc kubenswrapper[4904]: I0223 10:26:39.966765 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerDied","Data":"650daa17446f499de6eb449baa1e9e6b69c71e97f0630f7012af3372f59974ae"} Feb 23 10:26:40 crc kubenswrapper[4904]: I0223 10:26:40.984012 4904 generic.go:334] "Generic (PLEG): container finished" podID="370d441a-a231-46cd-b528-1a80d8c593bc" containerID="200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23" exitCode=143 Feb 23 10:26:40 crc kubenswrapper[4904]: I0223 10:26:40.984384 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"370d441a-a231-46cd-b528-1a80d8c593bc","Type":"ContainerDied","Data":"200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23"} Feb 23 10:26:40 crc kubenswrapper[4904]: I0223 10:26:40.991151 4904 generic.go:334] "Generic (PLEG): container finished" podID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerID="c21d2acd45dd7a1af73b2aa905dfb559e04cd7f63220ef45099a657fadfc5fb8" exitCode=0 Feb 23 10:26:40 crc kubenswrapper[4904]: I0223 10:26:40.991216 4904 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656f88b4-fzcg6" event={"ID":"e3b57aa5-707b-41f7-af28-34a33cb8e84e","Type":"ContainerDied","Data":"c21d2acd45dd7a1af73b2aa905dfb559e04cd7f63220ef45099a657fadfc5fb8"} Feb 23 10:26:41 crc kubenswrapper[4904]: W0223 10:26:41.481448 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b9863a0_ae98_4daf_8cfe_eaaeb5f3d62f.slice/crio-3157c49cc01770c1d15a9c1e75d40bd2ca5e6a06477d31faa8f560b097c8e2d7 WatchSource:0}: Error finding container 3157c49cc01770c1d15a9c1e75d40bd2ca5e6a06477d31faa8f560b097c8e2d7: Status 404 returned error can't find the container with id 3157c49cc01770c1d15a9c1e75d40bd2ca5e6a06477d31faa8f560b097c8e2d7 Feb 23 10:26:41 crc kubenswrapper[4904]: I0223 10:26:41.701483 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-56599cf886-x6z6x" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 23 10:26:41 crc kubenswrapper[4904]: I0223 10:26:41.708556 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.006979 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.010764 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0819d4e1-7204-41b9-80c0-1b8e86fb211d","Type":"ContainerDied","Data":"e702e90d8e5192a565dff24a08b78ad06058bda0c46781f08145b2c126932806"} Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.010854 4904 scope.go:117] "RemoveContainer" containerID="7b9cb4add431cbc249a603e2bfa5389a0f4ba87ef27fb2bd3c811455cca5baf1" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.017224 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d55dc77cc-gg7pn" event={"ID":"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f","Type":"ContainerStarted","Data":"f47ecb4527aab8a4009a95521bc6520b710480c7bd181ff659564d2b743ea76a"} Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.017294 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d55dc77cc-gg7pn" event={"ID":"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f","Type":"ContainerStarted","Data":"3157c49cc01770c1d15a9c1e75d40bd2ca5e6a06477d31faa8f560b097c8e2d7"} Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.075747 4904 scope.go:117] "RemoveContainer" containerID="25dd08c1c8d066cb437b1d6dbd7839292b3a914bd9976f76934fefff8f9410c0" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.101589 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.101709 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-internal-tls-certs\") pod \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 
10:26:42.101778 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-scripts\") pod \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.101808 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-combined-ca-bundle\") pod \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.101946 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-logs\") pod \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.101980 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh475\" (UniqueName: \"kubernetes.io/projected/0819d4e1-7204-41b9-80c0-1b8e86fb211d-kube-api-access-hh475\") pod \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.102105 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-httpd-run\") pod \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.102218 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-config-data\") pod \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\" (UID: \"0819d4e1-7204-41b9-80c0-1b8e86fb211d\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.106695 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-logs" (OuterVolumeSpecName: "logs") pod "0819d4e1-7204-41b9-80c0-1b8e86fb211d" (UID: "0819d4e1-7204-41b9-80c0-1b8e86fb211d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.107234 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0819d4e1-7204-41b9-80c0-1b8e86fb211d" (UID: "0819d4e1-7204-41b9-80c0-1b8e86fb211d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.118001 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0819d4e1-7204-41b9-80c0-1b8e86fb211d-kube-api-access-hh475" (OuterVolumeSpecName: "kube-api-access-hh475") pod "0819d4e1-7204-41b9-80c0-1b8e86fb211d" (UID: "0819d4e1-7204-41b9-80c0-1b8e86fb211d"). InnerVolumeSpecName "kube-api-access-hh475". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.122961 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-scripts" (OuterVolumeSpecName: "scripts") pod "0819d4e1-7204-41b9-80c0-1b8e86fb211d" (UID: "0819d4e1-7204-41b9-80c0-1b8e86fb211d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.161978 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "0819d4e1-7204-41b9-80c0-1b8e86fb211d" (UID: "0819d4e1-7204-41b9-80c0-1b8e86fb211d"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.204582 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.204620 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.204630 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.204639 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh475\" (UniqueName: \"kubernetes.io/projected/0819d4e1-7204-41b9-80c0-1b8e86fb211d-kube-api-access-hh475\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.204651 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0819d4e1-7204-41b9-80c0-1b8e86fb211d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.217930 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.219846 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.305680 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-config\") pod \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.306098 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-combined-ca-bundle\") pod \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.306126 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-sg-core-conf-yaml\") pod \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.306992 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-combined-ca-bundle\") pod \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.307034 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-log-httpd\") pod \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.307105 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-config-data\") pod \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.307266 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-run-httpd\") pod \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.307339 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h444n\" (UniqueName: \"kubernetes.io/projected/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-kube-api-access-h444n\") pod \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.307375 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-scripts\") pod \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\" (UID: \"9730fc29-9b72-4318-9f83-00fc9e8a7dc5\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.307566 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dbll\" (UniqueName: \"kubernetes.io/projected/e3b57aa5-707b-41f7-af28-34a33cb8e84e-kube-api-access-4dbll\") pod \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\" (UID: 
\"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.307672 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-ovndb-tls-certs\") pod \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.307725 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-httpd-config\") pod \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\" (UID: \"e3b57aa5-707b-41f7-af28-34a33cb8e84e\") " Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.309466 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9730fc29-9b72-4318-9f83-00fc9e8a7dc5" (UID: "9730fc29-9b72-4318-9f83-00fc9e8a7dc5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.311169 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9730fc29-9b72-4318-9f83-00fc9e8a7dc5" (UID: "9730fc29-9b72-4318-9f83-00fc9e8a7dc5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.311937 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.313688 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0819d4e1-7204-41b9-80c0-1b8e86fb211d" (UID: "0819d4e1-7204-41b9-80c0-1b8e86fb211d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.317555 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-scripts" (OuterVolumeSpecName: "scripts") pod "9730fc29-9b72-4318-9f83-00fc9e8a7dc5" (UID: "9730fc29-9b72-4318-9f83-00fc9e8a7dc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.320217 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-kube-api-access-h444n" (OuterVolumeSpecName: "kube-api-access-h444n") pod "9730fc29-9b72-4318-9f83-00fc9e8a7dc5" (UID: "9730fc29-9b72-4318-9f83-00fc9e8a7dc5"). InnerVolumeSpecName "kube-api-access-h444n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.320359 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e3b57aa5-707b-41f7-af28-34a33cb8e84e" (UID: "e3b57aa5-707b-41f7-af28-34a33cb8e84e"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.323740 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b57aa5-707b-41f7-af28-34a33cb8e84e-kube-api-access-4dbll" (OuterVolumeSpecName: "kube-api-access-4dbll") pod "e3b57aa5-707b-41f7-af28-34a33cb8e84e" (UID: "e3b57aa5-707b-41f7-af28-34a33cb8e84e"). InnerVolumeSpecName "kube-api-access-4dbll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.325673 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-config-data" (OuterVolumeSpecName: "config-data") pod "0819d4e1-7204-41b9-80c0-1b8e86fb211d" (UID: "0819d4e1-7204-41b9-80c0-1b8e86fb211d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.351658 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0819d4e1-7204-41b9-80c0-1b8e86fb211d" (UID: "0819d4e1-7204-41b9-80c0-1b8e86fb211d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.373214 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9730fc29-9b72-4318-9f83-00fc9e8a7dc5" (UID: "9730fc29-9b72-4318-9f83-00fc9e8a7dc5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.397859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-config" (OuterVolumeSpecName: "config") pod "e3b57aa5-707b-41f7-af28-34a33cb8e84e" (UID: "e3b57aa5-707b-41f7-af28-34a33cb8e84e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412635 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412680 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412697 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412709 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dbll\" (UniqueName: \"kubernetes.io/projected/e3b57aa5-707b-41f7-af28-34a33cb8e84e-kube-api-access-4dbll\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412744 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412761 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412775 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412787 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412799 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0819d4e1-7204-41b9-80c0-1b8e86fb211d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412811 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412823 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h444n\" (UniqueName: \"kubernetes.io/projected/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-kube-api-access-h444n\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.412834 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.425530 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9730fc29-9b72-4318-9f83-00fc9e8a7dc5" (UID: 
"9730fc29-9b72-4318-9f83-00fc9e8a7dc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.434004 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3b57aa5-707b-41f7-af28-34a33cb8e84e" (UID: "e3b57aa5-707b-41f7-af28-34a33cb8e84e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.452487 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e3b57aa5-707b-41f7-af28-34a33cb8e84e" (UID: "e3b57aa5-707b-41f7-af28-34a33cb8e84e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.462222 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-config-data" (OuterVolumeSpecName: "config-data") pod "9730fc29-9b72-4318-9f83-00fc9e8a7dc5" (UID: "9730fc29-9b72-4318-9f83-00fc9e8a7dc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.516349 4904 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.516393 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.516410 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b57aa5-707b-41f7-af28-34a33cb8e84e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:42 crc kubenswrapper[4904]: I0223 10:26:42.516422 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9730fc29-9b72-4318-9f83-00fc9e8a7dc5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.029938 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.032706 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d55dc77cc-gg7pn" event={"ID":"7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f","Type":"ContainerStarted","Data":"31dbad4b7da3c1d7ccea2b714c155395a42c22bf68a76703789d8e4d48ab3e25"} Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.032878 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.036184 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9730fc29-9b72-4318-9f83-00fc9e8a7dc5","Type":"ContainerDied","Data":"5c1d1726c3e67ce35471a2ad92457ca176b9ea40765bd8096ee5d0983279f033"} Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.036249 4904 scope.go:117] "RemoveContainer" containerID="b180e79814c3477943d39a7a5afdb94e895321c51d4a6e22f2734590303e0c20" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.036203 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.038076 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8","Type":"ContainerStarted","Data":"8f2d033febd17142f4dd0348d822de71a28d5f2d848ecfb116161603d6a4a16f"} Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.039860 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8656f88b4-fzcg6" event={"ID":"e3b57aa5-707b-41f7-af28-34a33cb8e84e","Type":"ContainerDied","Data":"79326774c33c324a998aab923fc9814a15814e24ac968614b4fa889a42e2f192"} Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.039883 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8656f88b4-fzcg6" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.070884 4904 scope.go:117] "RemoveContainer" containerID="26f7ddcb940fe1851947e658fe7181b5152480c3a13e982c40afad73cbdb5fbf" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.085328 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-d55dc77cc-gg7pn" podStartSLOduration=11.085304746 podStartE2EDuration="11.085304746s" podCreationTimestamp="2026-02-23 10:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:43.061994502 +0000 UTC m=+1236.482368015" watchObservedRunningTime="2026-02-23 10:26:43.085304746 +0000 UTC m=+1236.505678259" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.112683 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.950341254 podStartE2EDuration="16.112661595s" podCreationTimestamp="2026-02-23 10:26:27 +0000 UTC" firstStartedPulling="2026-02-23 10:26:28.559125348 +0000 UTC m=+1221.979498871" lastFinishedPulling="2026-02-23 10:26:41.721445709 +0000 UTC m=+1235.141819212" observedRunningTime="2026-02-23 10:26:43.108171637 +0000 UTC m=+1236.528545150" watchObservedRunningTime="2026-02-23 10:26:43.112661595 +0000 UTC m=+1236.533035108" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.239899 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.243384 4904 scope.go:117] "RemoveContainer" containerID="650daa17446f499de6eb449baa1e9e6b69c71e97f0630f7012af3372f59974ae" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.297782 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.303140 4904 scope.go:117] "RemoveContainer" containerID="77881d5a564f0133b20822c5bc81ab16294b70510ae79e7fb54b4c6d0a15be4e" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.308495 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.329319 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8656f88b4-fzcg6"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.338897 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8656f88b4-fzcg6"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.349606 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:43 crc kubenswrapper[4904]: E0223 10:26:43.350090 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="proxy-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350108 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="proxy-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: E0223 10:26:43.350120 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerName="neutron-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350127 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerName="neutron-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: E0223 
10:26:43.350140 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerName="neutron-api" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350147 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerName="neutron-api" Feb 23 10:26:43 crc kubenswrapper[4904]: E0223 10:26:43.350160 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="sg-core" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350167 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="sg-core" Feb 23 10:26:43 crc kubenswrapper[4904]: E0223 10:26:43.350179 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350185 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: E0223 10:26:43.350205 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-log" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350211 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-log" Feb 23 10:26:43 crc kubenswrapper[4904]: E0223 10:26:43.350231 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="ceilometer-central-agent" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350237 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="ceilometer-central-agent" Feb 23 10:26:43 crc kubenswrapper[4904]: E0223 10:26:43.350253 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="ceilometer-notification-agent" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350261 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="ceilometer-notification-agent" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350466 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="sg-core" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350478 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="proxy-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350492 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" containerName="neutron-api" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350505 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="ceilometer-notification-agent" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350527 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" containerName="ceilometer-central-agent" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350537 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" 
containerName="neutron-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350551 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-httpd" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.350569 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" containerName="glance-log" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.352518 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.357652 4904 scope.go:117] "RemoveContainer" containerID="6661758c00cc5a88a75673faa655d47686b45d837e898dc85caf3fb0bce6dd4a" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.357899 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.358012 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.367654 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.388264 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.399634 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.408751 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.410762 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.413391 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.413616 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.431423 4904 scope.go:117] "RemoveContainer" containerID="c21d2acd45dd7a1af73b2aa905dfb559e04cd7f63220ef45099a657fadfc5fb8" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.444638 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.468422 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-log-httpd\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.468488 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.468525 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-config-data\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.468545 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-scripts\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.468622 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.468666 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-run-httpd\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.468702 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjkzr\" (UniqueName: \"kubernetes.io/projected/d2d1c60a-e612-442b-9c87-28262f0fcde6-kube-api-access-jjkzr\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.570670 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.570849 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.570913 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.570942 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/262fefd2-494e-4121-97f8-9c3e66e9afd7-logs\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.570999 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-run-httpd\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571025 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/262fefd2-494e-4121-97f8-9c3e66e9afd7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571075 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjkzr\" (UniqueName: \"kubernetes.io/projected/d2d1c60a-e612-442b-9c87-28262f0fcde6-kube-api-access-jjkzr\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571098 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctcxk\" (UniqueName: \"kubernetes.io/projected/262fefd2-494e-4121-97f8-9c3e66e9afd7-kube-api-access-ctcxk\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571133 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571166 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571205 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571254 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-log-httpd\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571303 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571353 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-config-data\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.571376 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-scripts\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.574220 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-run-httpd\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.576877 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-log-httpd\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.584653 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-scripts\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.586803 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-config-data\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.588022 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.594310 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.601560 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjkzr\" (UniqueName: \"kubernetes.io/projected/d2d1c60a-e612-442b-9c87-28262f0fcde6-kube-api-access-jjkzr\") pod \"ceilometer-0\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.674483 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.674653 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.674810 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.674944 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.674998 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/262fefd2-494e-4121-97f8-9c3e66e9afd7-logs\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.675021 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/262fefd2-494e-4121-97f8-9c3e66e9afd7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.675086 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctcxk\" (UniqueName: \"kubernetes.io/projected/262fefd2-494e-4121-97f8-9c3e66e9afd7-kube-api-access-ctcxk\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.675135 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.676306 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.677166 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/262fefd2-494e-4121-97f8-9c3e66e9afd7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.677341 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/262fefd2-494e-4121-97f8-9c3e66e9afd7-logs\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.677444 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.691480 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.692289 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.718028 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctcxk\" (UniqueName: \"kubernetes.io/projected/262fefd2-494e-4121-97f8-9c3e66e9afd7-kube-api-access-ctcxk\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.719233 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.771370 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/262fefd2-494e-4121-97f8-9c3e66e9afd7-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.803645 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.806665 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56bb764cb4-74prt" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.808540 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"262fefd2-494e-4121-97f8-9c3e66e9afd7\") " pod="openstack/glance-default-internal-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.836103 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.984569 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"370d441a-a231-46cd-b528-1a80d8c593bc\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.984895 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-combined-ca-bundle\") pod \"370d441a-a231-46cd-b528-1a80d8c593bc\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.984924 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-logs\") pod \"370d441a-a231-46cd-b528-1a80d8c593bc\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.984979 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4vkc\" (UniqueName: \"kubernetes.io/projected/370d441a-a231-46cd-b528-1a80d8c593bc-kube-api-access-l4vkc\") pod \"370d441a-a231-46cd-b528-1a80d8c593bc\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.985022 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-scripts\") pod \"370d441a-a231-46cd-b528-1a80d8c593bc\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.985199 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-config-data\") pod \"370d441a-a231-46cd-b528-1a80d8c593bc\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.985311 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-httpd-run\") pod \"370d441a-a231-46cd-b528-1a80d8c593bc\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.985384 4904 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-public-tls-certs\") pod \"370d441a-a231-46cd-b528-1a80d8c593bc\" (UID: \"370d441a-a231-46cd-b528-1a80d8c593bc\") " Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.985564 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f8c7c9fd4-2449r"] Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.985561 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-logs" (OuterVolumeSpecName: "logs") pod "370d441a-a231-46cd-b528-1a80d8c593bc" (UID: "370d441a-a231-46cd-b528-1a80d8c593bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.985858 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-f8c7c9fd4-2449r" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerName="placement-log" containerID="cri-o://f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da" gracePeriod=30 Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.986302 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.986386 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-f8c7c9fd4-2449r" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerName="placement-api" containerID="cri-o://ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e" gracePeriod=30 Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.986627 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "370d441a-a231-46cd-b528-1a80d8c593bc" (UID: "370d441a-a231-46cd-b528-1a80d8c593bc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.988368 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "370d441a-a231-46cd-b528-1a80d8c593bc" (UID: "370d441a-a231-46cd-b528-1a80d8c593bc"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.990534 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-scripts" (OuterVolumeSpecName: "scripts") pod "370d441a-a231-46cd-b528-1a80d8c593bc" (UID: "370d441a-a231-46cd-b528-1a80d8c593bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:43 crc kubenswrapper[4904]: I0223 10:26:43.999809 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370d441a-a231-46cd-b528-1a80d8c593bc-kube-api-access-l4vkc" (OuterVolumeSpecName: "kube-api-access-l4vkc") pod "370d441a-a231-46cd-b528-1a80d8c593bc" (UID: "370d441a-a231-46cd-b528-1a80d8c593bc"). InnerVolumeSpecName "kube-api-access-l4vkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.042995 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.088614 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370d441a-a231-46cd-b528-1a80d8c593bc-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.088659 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.088669 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4vkc\" (UniqueName: \"kubernetes.io/projected/370d441a-a231-46cd-b528-1a80d8c593bc-kube-api-access-l4vkc\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.088681 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.090944 4904 generic.go:334] "Generic (PLEG): container finished" podID="370d441a-a231-46cd-b528-1a80d8c593bc" containerID="2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a" exitCode=0 Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.091746 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.092616 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"370d441a-a231-46cd-b528-1a80d8c593bc","Type":"ContainerDied","Data":"2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a"} Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.092657 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"370d441a-a231-46cd-b528-1a80d8c593bc","Type":"ContainerDied","Data":"7ec7a6087ea6691d7c709a4f5ee9b21f38b0156fea07f3420e86f544e92affed"} Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.092674 4904 scope.go:117] "RemoveContainer" containerID="2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.098885 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "370d441a-a231-46cd-b528-1a80d8c593bc" (UID: "370d441a-a231-46cd-b528-1a80d8c593bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.120015 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "370d441a-a231-46cd-b528-1a80d8c593bc" (UID: "370d441a-a231-46cd-b528-1a80d8c593bc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.120283 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-config-data" (OuterVolumeSpecName: "config-data") pod "370d441a-a231-46cd-b528-1a80d8c593bc" (UID: "370d441a-a231-46cd-b528-1a80d8c593bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.167543 4904 scope.go:117] "RemoveContainer" containerID="200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.172861 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.193369 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.193400 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.193413 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.193422 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370d441a-a231-46cd-b528-1a80d8c593bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.198640 4904 scope.go:117] "RemoveContainer" containerID="2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a" Feb 23 10:26:44 crc kubenswrapper[4904]: E0223 10:26:44.199299 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a\": container with ID starting with 2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a not found: ID does not exist" containerID="2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.199340 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a"} err="failed to get container status \"2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a\": rpc error: code = NotFound desc = could not find container \"2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a\": container with ID starting with 2922040b7dd2ecaecc651fed679cf4ddac18ae6910f604fffabf2eb10a01f30a not found: ID does not exist" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.199363 4904 scope.go:117] "RemoveContainer" containerID="200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23" Feb 23 10:26:44 crc kubenswrapper[4904]: E0223 10:26:44.199854 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23\": container with ID starting with 200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23 not found: ID does not exist" containerID="200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.199877 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23"} err="failed to get container status \"200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23\": rpc error: code = NotFound desc = could not find container \"200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23\": container with ID starting with 200469776c22b49cc82dd7c4a0d10e483f91f1691c5b73b75cc5db23de4b1d23 not found: ID does not exist" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.447796 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.464002 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.475977 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.491756 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:26:44 crc kubenswrapper[4904]: E0223 10:26:44.492318 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" containerName="glance-log" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.492336 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" containerName="glance-log" Feb 23 10:26:44 crc kubenswrapper[4904]: E0223 10:26:44.492358 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" containerName="glance-httpd" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.492367 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" containerName="glance-httpd" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.492594 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" containerName="glance-httpd" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.492610 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" containerName="glance-log" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.493833 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.500069 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.502087 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.505474 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.603626 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-scripts\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.603740 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/070efab3-5f9b-464c-8717-89ddd79f1ec9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.603780 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.603805 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.604012 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.604130 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6wjp\" (UniqueName: \"kubernetes.io/projected/070efab3-5f9b-464c-8717-89ddd79f1ec9-kube-api-access-t6wjp\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.604206 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-config-data\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.604416 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/070efab3-5f9b-464c-8717-89ddd79f1ec9-logs\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.706554 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/070efab3-5f9b-464c-8717-89ddd79f1ec9-logs\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.706934 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-scripts\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707001 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/070efab3-5f9b-464c-8717-89ddd79f1ec9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707031 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707058 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707126 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707164 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6wjp\" (UniqueName: \"kubernetes.io/projected/070efab3-5f9b-464c-8717-89ddd79f1ec9-kube-api-access-t6wjp\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707198 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-config-data\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707299 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/070efab3-5f9b-464c-8717-89ddd79f1ec9-logs\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707467 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/070efab3-5f9b-464c-8717-89ddd79f1ec9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.707706 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.714979 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.715418 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-config-data\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.718941 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.720235 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/070efab3-5f9b-464c-8717-89ddd79f1ec9-scripts\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.736026 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6wjp\" (UniqueName: \"kubernetes.io/projected/070efab3-5f9b-464c-8717-89ddd79f1ec9-kube-api-access-t6wjp\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.742177 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.747066 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"070efab3-5f9b-464c-8717-89ddd79f1ec9\") " pod="openstack/glance-default-external-api-0" Feb 23 10:26:44 crc kubenswrapper[4904]: I0223 10:26:44.820440 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.113752 4904 generic.go:334] "Generic (PLEG): container finished" podID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerID="f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da" exitCode=143 Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.113880 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f8c7c9fd4-2449r" event={"ID":"a1f8a283-bc3b-4cd4-ab91-244942a44e58","Type":"ContainerDied","Data":"f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da"} Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.120183 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerStarted","Data":"12b809d2615d4538a963748bfbaa9f5c6fc2d61d9908957d5e91b084164af64b"} Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.123366 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"262fefd2-494e-4121-97f8-9c3e66e9afd7","Type":"ContainerStarted","Data":"4e331b096bc61f2b49d436415273b215dfb18666198f3dcb770586c1a9f932ad"} Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.276190 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0819d4e1-7204-41b9-80c0-1b8e86fb211d" path="/var/lib/kubelet/pods/0819d4e1-7204-41b9-80c0-1b8e86fb211d/volumes" Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.277970 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370d441a-a231-46cd-b528-1a80d8c593bc" path="/var/lib/kubelet/pods/370d441a-a231-46cd-b528-1a80d8c593bc/volumes" Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.278816 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9730fc29-9b72-4318-9f83-00fc9e8a7dc5" path="/var/lib/kubelet/pods/9730fc29-9b72-4318-9f83-00fc9e8a7dc5/volumes" Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.280308 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b57aa5-707b-41f7-af28-34a33cb8e84e" path="/var/lib/kubelet/pods/e3b57aa5-707b-41f7-af28-34a33cb8e84e/volumes" Feb 23 10:26:45 crc kubenswrapper[4904]: I0223 10:26:45.523202 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 10:26:45 crc kubenswrapper[4904]: W0223 10:26:45.524338 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod070efab3_5f9b_464c_8717_89ddd79f1ec9.slice/crio-6433e0894db25ad4be1eb71afb428073d88e7a66251958e8a45910e23c94b7f6 WatchSource:0}: Error finding container 6433e0894db25ad4be1eb71afb428073d88e7a66251958e8a45910e23c94b7f6: Status 404 returned error can't find the container with id 6433e0894db25ad4be1eb71afb428073d88e7a66251958e8a45910e23c94b7f6 Feb 23 10:26:46 crc kubenswrapper[4904]: I0223 10:26:46.157476 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerStarted","Data":"5876f58320ba2fda9d5a3bde5a784d7eea607299efab8816380aac2c39de4f74"} Feb 23 10:26:46 crc kubenswrapper[4904]: I0223 10:26:46.157930 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerStarted","Data":"964e6f5458863f11647447362d607d4fb4a15dda069cbd90a2e7693f5c49b51f"} Feb 23 10:26:46 crc kubenswrapper[4904]: I0223 10:26:46.159084 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"070efab3-5f9b-464c-8717-89ddd79f1ec9","Type":"ContainerStarted","Data":"6433e0894db25ad4be1eb71afb428073d88e7a66251958e8a45910e23c94b7f6"} Feb 23 10:26:46 crc kubenswrapper[4904]: I0223 10:26:46.160680 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"262fefd2-494e-4121-97f8-9c3e66e9afd7","Type":"ContainerStarted","Data":"281557a7b834775f58ec19bf7d5cf083904cc97d42f3d803ec44ae019b11ad3a"} Feb 23 10:26:46 crc kubenswrapper[4904]: I0223 10:26:46.978836 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7bvr2"] Feb 23 10:26:46 crc kubenswrapper[4904]: I0223 10:26:46.980773 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:46 crc kubenswrapper[4904]: I0223 10:26:46.991360 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7bvr2"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.092960 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8vhcb"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.094568 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.097109 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28443614-6612-4fc3-9043-782b2175ddb3-operator-scripts\") pod \"nova-api-db-create-7bvr2\" (UID: \"28443614-6612-4fc3-9043-782b2175ddb3\") " pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.097227 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-865gr\" (UniqueName: \"kubernetes.io/projected/28443614-6612-4fc3-9043-782b2175ddb3-kube-api-access-865gr\") pod \"nova-api-db-create-7bvr2\" (UID: \"28443614-6612-4fc3-9043-782b2175ddb3\") " pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.115968 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ab80-account-create-update-j58px"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.118705 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.129845 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8vhcb"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.130055 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.185610 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerStarted","Data":"dd2f5440f82603511f11622af292f0ed0d1a0833269abd220b84be0b3771841e"} Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.187096 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"070efab3-5f9b-464c-8717-89ddd79f1ec9","Type":"ContainerStarted","Data":"0e594c18e45b5868a80a042d5bc5b3a44ba4dd494d209b7df17a2b48937529dd"} Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.187121 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"070efab3-5f9b-464c-8717-89ddd79f1ec9","Type":"ContainerStarted","Data":"c299b4ce29445d9cec4799029335d4e9391b614c3277b64436a9e93625aa908d"} Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.194131 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"262fefd2-494e-4121-97f8-9c3e66e9afd7","Type":"ContainerStarted","Data":"c9db68fd41dedf88d18ca6bd97bf127a190fe8dbc194cc871adf5e3ee829284e"} Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.195363 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ab80-account-create-update-j58px"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.200041 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfsbj\" (UniqueName: \"kubernetes.io/projected/4c368d56-bebc-433e-8519-9b8ab1ef51a4-kube-api-access-zfsbj\") pod \"nova-api-ab80-account-create-update-j58px\" (UID: \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\") " pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.200148 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28443614-6612-4fc3-9043-782b2175ddb3-operator-scripts\") pod \"nova-api-db-create-7bvr2\" (UID: \"28443614-6612-4fc3-9043-782b2175ddb3\") " pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.200213 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882ac903-0c5a-48c6-977d-645792363692-operator-scripts\") pod \"nova-cell0-db-create-8vhcb\" (UID: \"882ac903-0c5a-48c6-977d-645792363692\") " pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.200264 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-865gr\" (UniqueName: \"kubernetes.io/projected/28443614-6612-4fc3-9043-782b2175ddb3-kube-api-access-865gr\") pod \"nova-api-db-create-7bvr2\" (UID: \"28443614-6612-4fc3-9043-782b2175ddb3\") " pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.200767 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c368d56-bebc-433e-8519-9b8ab1ef51a4-operator-scripts\") pod \"nova-api-ab80-account-create-update-j58px\" (UID: \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\") " pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.200799 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5h87\" (UniqueName: \"kubernetes.io/projected/882ac903-0c5a-48c6-977d-645792363692-kube-api-access-p5h87\") pod \"nova-cell0-db-create-8vhcb\" (UID: \"882ac903-0c5a-48c6-977d-645792363692\") " pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.201140 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28443614-6612-4fc3-9043-782b2175ddb3-operator-scripts\") pod \"nova-api-db-create-7bvr2\" (UID: \"28443614-6612-4fc3-9043-782b2175ddb3\") " pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.248959 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-865gr\" (UniqueName: \"kubernetes.io/projected/28443614-6612-4fc3-9043-782b2175ddb3-kube-api-access-865gr\") pod \"nova-api-db-create-7bvr2\" (UID: \"28443614-6612-4fc3-9043-782b2175ddb3\") " pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.296524 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.307425 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c368d56-bebc-433e-8519-9b8ab1ef51a4-operator-scripts\") pod \"nova-api-ab80-account-create-update-j58px\" (UID: \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\") " pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.307490 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5h87\" (UniqueName: \"kubernetes.io/projected/882ac903-0c5a-48c6-977d-645792363692-kube-api-access-p5h87\") pod \"nova-cell0-db-create-8vhcb\" (UID: \"882ac903-0c5a-48c6-977d-645792363692\") " pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.307610 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfsbj\" (UniqueName: \"kubernetes.io/projected/4c368d56-bebc-433e-8519-9b8ab1ef51a4-kube-api-access-zfsbj\") pod \"nova-api-ab80-account-create-update-j58px\" (UID: \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\") " pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.307691 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882ac903-0c5a-48c6-977d-645792363692-operator-scripts\") pod \"nova-cell0-db-create-8vhcb\" (UID: \"882ac903-0c5a-48c6-977d-645792363692\") " pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.310424 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4c368d56-bebc-433e-8519-9b8ab1ef51a4-operator-scripts\") pod \"nova-api-ab80-account-create-update-j58px\" (UID: \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\") " pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.314350 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882ac903-0c5a-48c6-977d-645792363692-operator-scripts\") pod \"nova-cell0-db-create-8vhcb\" (UID: \"882ac903-0c5a-48c6-977d-645792363692\") " pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.333224 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.333201428 podStartE2EDuration="3.333201428s" podCreationTimestamp="2026-02-23 10:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:47.226772197 +0000 UTC m=+1240.647145710" watchObservedRunningTime="2026-02-23 10:26:47.333201428 +0000 UTC m=+1240.753574941" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.341092 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.341074802 podStartE2EDuration="4.341074802s" podCreationTimestamp="2026-02-23 10:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:47.250091701 +0000 UTC m=+1240.670465214" watchObservedRunningTime="2026-02-23 10:26:47.341074802 +0000 UTC m=+1240.761448315" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.362788 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8f5a-account-create-update-q5hlx"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.364473 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.374056 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.383850 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xgx5c"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.385768 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.393556 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8f5a-account-create-update-q5hlx"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.444387 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-operator-scripts\") pod \"nova-cell1-db-create-xgx5c\" (UID: \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\") " pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.444448 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gsh9\" (UniqueName: \"kubernetes.io/projected/ca88f3bb-29c1-41ef-8355-f9c52d62438a-kube-api-access-9gsh9\") pod \"nova-cell0-8f5a-account-create-update-q5hlx\" (UID: \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\") " pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.444531 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bdzr\" (UniqueName: \"kubernetes.io/projected/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-kube-api-access-5bdzr\") pod \"nova-cell1-db-create-xgx5c\" (UID: \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\") " pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.444589 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca88f3bb-29c1-41ef-8355-f9c52d62438a-operator-scripts\") pod \"nova-cell0-8f5a-account-create-update-q5hlx\" (UID: \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\") " pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.468422 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfsbj\" (UniqueName: \"kubernetes.io/projected/4c368d56-bebc-433e-8519-9b8ab1ef51a4-kube-api-access-zfsbj\") pod \"nova-api-ab80-account-create-update-j58px\" (UID: \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\") " pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.475215 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.506237 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xgx5c"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.548255 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca88f3bb-29c1-41ef-8355-f9c52d62438a-operator-scripts\") pod \"nova-cell0-8f5a-account-create-update-q5hlx\" (UID: \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\") " pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.548664 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-operator-scripts\") pod \"nova-cell1-db-create-xgx5c\" (UID: \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\") " pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.548701 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gsh9\" (UniqueName: \"kubernetes.io/projected/ca88f3bb-29c1-41ef-8355-f9c52d62438a-kube-api-access-9gsh9\") pod \"nova-cell0-8f5a-account-create-update-q5hlx\" (UID: \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\") " pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.559446 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bdzr\" (UniqueName: \"kubernetes.io/projected/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-kube-api-access-5bdzr\") pod \"nova-cell1-db-create-xgx5c\" (UID: \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\") " pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.551339 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-operator-scripts\") pod \"nova-cell1-db-create-xgx5c\" (UID: \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\") " pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.556014 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5h87\" (UniqueName: \"kubernetes.io/projected/882ac903-0c5a-48c6-977d-645792363692-kube-api-access-p5h87\") pod \"nova-cell0-db-create-8vhcb\" (UID: \"882ac903-0c5a-48c6-977d-645792363692\") " pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.550470 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca88f3bb-29c1-41ef-8355-f9c52d62438a-operator-scripts\") pod \"nova-cell0-8f5a-account-create-update-q5hlx\" (UID: \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\") " pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.607754 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bdzr\" (UniqueName: \"kubernetes.io/projected/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-kube-api-access-5bdzr\") pod \"nova-cell1-db-create-xgx5c\" (UID: \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\") " pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.618266 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gsh9\" (UniqueName: \"kubernetes.io/projected/ca88f3bb-29c1-41ef-8355-f9c52d62438a-kube-api-access-9gsh9\") pod \"nova-cell0-8f5a-account-create-update-q5hlx\" (UID: \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\") " pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.698896 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-eeda-account-create-update-4g7sc"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.700579 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.703091 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.728683 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-eeda-account-create-update-4g7sc"] Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.759426 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.768380 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0650495a-8169-4d67-b016-f52cb76911b8-operator-scripts\") pod \"nova-cell1-eeda-account-create-update-4g7sc\" (UID: \"0650495a-8169-4d67-b016-f52cb76911b8\") " pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.768538 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw66m\" (UniqueName: \"kubernetes.io/projected/0650495a-8169-4d67-b016-f52cb76911b8-kube-api-access-fw66m\") pod \"nova-cell1-eeda-account-create-update-4g7sc\" (UID: \"0650495a-8169-4d67-b016-f52cb76911b8\") " pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.845875 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.890192 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.891267 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw66m\" (UniqueName: \"kubernetes.io/projected/0650495a-8169-4d67-b016-f52cb76911b8-kube-api-access-fw66m\") pod \"nova-cell1-eeda-account-create-update-4g7sc\" (UID: \"0650495a-8169-4d67-b016-f52cb76911b8\") " pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.891404 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0650495a-8169-4d67-b016-f52cb76911b8-operator-scripts\") pod \"nova-cell1-eeda-account-create-update-4g7sc\" (UID: \"0650495a-8169-4d67-b016-f52cb76911b8\") " pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.892267 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0650495a-8169-4d67-b016-f52cb76911b8-operator-scripts\") pod \"nova-cell1-eeda-account-create-update-4g7sc\" (UID: \"0650495a-8169-4d67-b016-f52cb76911b8\") " pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:47 crc kubenswrapper[4904]: I0223 10:26:47.935060 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw66m\" (UniqueName: \"kubernetes.io/projected/0650495a-8169-4d67-b016-f52cb76911b8-kube-api-access-fw66m\") pod \"nova-cell1-eeda-account-create-update-4g7sc\" (UID: \"0650495a-8169-4d67-b016-f52cb76911b8\") " pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.001825 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.096661 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-scripts\") pod \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.097647 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f8a283-bc3b-4cd4-ab91-244942a44e58-logs\") pod \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.097834 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-internal-tls-certs\") pod \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.097916 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-public-tls-certs\") pod \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.097995 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-config-data\") pod \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.098118 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-combined-ca-bundle\") pod \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.098833 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9277l\" (UniqueName: \"kubernetes.io/projected/a1f8a283-bc3b-4cd4-ab91-244942a44e58-kube-api-access-9277l\") pod \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\" (UID: \"a1f8a283-bc3b-4cd4-ab91-244942a44e58\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.101459 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1f8a283-bc3b-4cd4-ab91-244942a44e58-logs" (OuterVolumeSpecName: "logs") pod "a1f8a283-bc3b-4cd4-ab91-244942a44e58" (UID: "a1f8a283-bc3b-4cd4-ab91-244942a44e58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.102663 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f8a283-bc3b-4cd4-ab91-244942a44e58-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.111533 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f8a283-bc3b-4cd4-ab91-244942a44e58-kube-api-access-9277l" (OuterVolumeSpecName: "kube-api-access-9277l") pod "a1f8a283-bc3b-4cd4-ab91-244942a44e58" (UID: "a1f8a283-bc3b-4cd4-ab91-244942a44e58"). 
InnerVolumeSpecName "kube-api-access-9277l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.119294 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-scripts" (OuterVolumeSpecName: "scripts") pod "a1f8a283-bc3b-4cd4-ab91-244942a44e58" (UID: "a1f8a283-bc3b-4cd4-ab91-244942a44e58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.191959 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1f8a283-bc3b-4cd4-ab91-244942a44e58" (UID: "a1f8a283-bc3b-4cd4-ab91-244942a44e58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.208889 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9277l\" (UniqueName: \"kubernetes.io/projected/a1f8a283-bc3b-4cd4-ab91-244942a44e58-kube-api-access-9277l\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.208916 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.208925 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.229608 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a1f8a283-bc3b-4cd4-ab91-244942a44e58" (UID: "a1f8a283-bc3b-4cd4-ab91-244942a44e58"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.233668 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.240292 4904 generic.go:334] "Generic (PLEG): container finished" podID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerID="ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e" exitCode=0 Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.240368 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f8c7c9fd4-2449r" event={"ID":"a1f8a283-bc3b-4cd4-ab91-244942a44e58","Type":"ContainerDied","Data":"ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e"} Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.240401 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f8c7c9fd4-2449r" event={"ID":"a1f8a283-bc3b-4cd4-ab91-244942a44e58","Type":"ContainerDied","Data":"1430355315af8a0c13d57587710cdaa31b1d5f3bbec3c38bafed7b38f5776ef8"} Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.240420 4904 scope.go:117] "RemoveContainer" containerID="ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.240665 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f8c7c9fd4-2449r" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.259356 4904 generic.go:334] "Generic (PLEG): container finished" podID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerID="33d0abadfeed8f191cd4e6b8bd96fedb35d9389fe8aa1a5686f8cd4c2edf0bf6" exitCode=137 Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.259405 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56599cf886-x6z6x" event={"ID":"16c88a53-6a67-457c-9cce-5fd72203ca30","Type":"ContainerDied","Data":"33d0abadfeed8f191cd4e6b8bd96fedb35d9389fe8aa1a5686f8cd4c2edf0bf6"} Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.279263 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.294643 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-config-data" (OuterVolumeSpecName: "config-data") pod "a1f8a283-bc3b-4cd4-ab91-244942a44e58" (UID: "a1f8a283-bc3b-4cd4-ab91-244942a44e58"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.308591 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d55dc77cc-gg7pn" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.311106 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.311124 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.321333 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a1f8a283-bc3b-4cd4-ab91-244942a44e58" (UID: "a1f8a283-bc3b-4cd4-ab91-244942a44e58"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.413243 4904 scope.go:117] "RemoveContainer" containerID="f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.415607 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f8a283-bc3b-4cd4-ab91-244942a44e58-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.460071 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7bvr2"] Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.485455 4904 scope.go:117] "RemoveContainer" containerID="ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e" Feb 23 10:26:48 crc kubenswrapper[4904]: E0223 10:26:48.489095 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e\": container with ID starting with ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e not found: ID does not exist" containerID="ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.489144 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e"} err="failed to get container status \"ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e\": rpc error: code = NotFound desc = could not find container \"ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e\": container with ID starting with ebcb92569aba0fb0c56f61c9a6b39976ab055ee18b79386a345bd5d25bc94f4e not found: ID does not exist" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.489172 4904 scope.go:117] "RemoveContainer" containerID="f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da" Feb 23 10:26:48 crc kubenswrapper[4904]: E0223 10:26:48.491761 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da\": container with ID starting with 
f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da not found: ID does not exist" containerID="f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.491795 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da"} err="failed to get container status \"f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da\": rpc error: code = NotFound desc = could not find container \"f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da\": container with ID starting with f3338b2973d059c8719646d74e13e18b08a68e9836148c28b867fbefa7e910da not found: ID does not exist" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.555148 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.625436 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-scripts\") pod \"16c88a53-6a67-457c-9cce-5fd72203ca30\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.626842 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwsbf\" (UniqueName: \"kubernetes.io/projected/16c88a53-6a67-457c-9cce-5fd72203ca30-kube-api-access-dwsbf\") pod \"16c88a53-6a67-457c-9cce-5fd72203ca30\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.626899 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-config-data\") pod \"16c88a53-6a67-457c-9cce-5fd72203ca30\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.626920 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c88a53-6a67-457c-9cce-5fd72203ca30-logs\") pod \"16c88a53-6a67-457c-9cce-5fd72203ca30\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.627103 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-secret-key\") pod \"16c88a53-6a67-457c-9cce-5fd72203ca30\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.627154 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-tls-certs\") pod \"16c88a53-6a67-457c-9cce-5fd72203ca30\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.627214 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-combined-ca-bundle\") pod \"16c88a53-6a67-457c-9cce-5fd72203ca30\" (UID: \"16c88a53-6a67-457c-9cce-5fd72203ca30\") " Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.628210 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/16c88a53-6a67-457c-9cce-5fd72203ca30-logs" (OuterVolumeSpecName: "logs") pod "16c88a53-6a67-457c-9cce-5fd72203ca30" (UID: "16c88a53-6a67-457c-9cce-5fd72203ca30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.635920 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "16c88a53-6a67-457c-9cce-5fd72203ca30" (UID: "16c88a53-6a67-457c-9cce-5fd72203ca30"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.638185 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ab80-account-create-update-j58px"] Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.646963 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c88a53-6a67-457c-9cce-5fd72203ca30-kube-api-access-dwsbf" (OuterVolumeSpecName: "kube-api-access-dwsbf") pod "16c88a53-6a67-457c-9cce-5fd72203ca30" (UID: "16c88a53-6a67-457c-9cce-5fd72203ca30"). InnerVolumeSpecName "kube-api-access-dwsbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.662974 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f8c7c9fd4-2449r"] Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.672466 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f8c7c9fd4-2449r"] Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.675454 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-config-data" (OuterVolumeSpecName: "config-data") pod "16c88a53-6a67-457c-9cce-5fd72203ca30" (UID: "16c88a53-6a67-457c-9cce-5fd72203ca30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.696882 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16c88a53-6a67-457c-9cce-5fd72203ca30" (UID: "16c88a53-6a67-457c-9cce-5fd72203ca30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.733741 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwsbf\" (UniqueName: \"kubernetes.io/projected/16c88a53-6a67-457c-9cce-5fd72203ca30-kube-api-access-dwsbf\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.733772 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.733782 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c88a53-6a67-457c-9cce-5fd72203ca30-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.733814 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.733827 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.746686 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-scripts" (OuterVolumeSpecName: "scripts") pod "16c88a53-6a67-457c-9cce-5fd72203ca30" (UID: "16c88a53-6a67-457c-9cce-5fd72203ca30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.798271 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "16c88a53-6a67-457c-9cce-5fd72203ca30" (UID: "16c88a53-6a67-457c-9cce-5fd72203ca30"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.819172 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8f5a-account-create-update-q5hlx"] Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.836704 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c88a53-6a67-457c-9cce-5fd72203ca30-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.836749 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c88a53-6a67-457c-9cce-5fd72203ca30-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:48 crc kubenswrapper[4904]: W0223 10:26:48.849621 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca88f3bb_29c1_41ef_8355_f9c52d62438a.slice/crio-34b1bac7114bc6b64705efcaf7a520c43929be3e0ceccc1d8c0c010ff03b92a5 WatchSource:0}: Error finding container 34b1bac7114bc6b64705efcaf7a520c43929be3e0ceccc1d8c0c010ff03b92a5: Status 404 returned error can't find the container with id 34b1bac7114bc6b64705efcaf7a520c43929be3e0ceccc1d8c0c010ff03b92a5 Feb 23 10:26:48 crc kubenswrapper[4904]: I0223 10:26:48.993876 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8vhcb"] Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.020668 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xgx5c"] Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.071558 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-eeda-account-create-update-4g7sc"] Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.268978 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" path="/var/lib/kubelet/pods/a1f8a283-bc3b-4cd4-ab91-244942a44e58/volumes" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.280418 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerStarted","Data":"d87c51e9b54e2073525ab467a81175d147b5890affcb3be45042e11bdcdf9b9a"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.280835 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.291549 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ab80-account-create-update-j58px" event={"ID":"4c368d56-bebc-433e-8519-9b8ab1ef51a4","Type":"ContainerStarted","Data":"3ca5f15ceacdc0b49538001afaf0255d5b4d7ab4c67657de7daa976c565130e7"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.291597 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ab80-account-create-update-j58px" event={"ID":"4c368d56-bebc-433e-8519-9b8ab1ef51a4","Type":"ContainerStarted","Data":"7afc19f4f30ae6cbfed349d50e646cce79c557e3810aba1220be2cae75ebe062"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.296912 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xgx5c" event={"ID":"49f85bef-7ffe-4784-8ba2-3bf8c6762e17","Type":"ContainerStarted","Data":"9f30741bcc6b8228dcb938823dec25f2af5cf3bdb13bd2931e4af9558398f068"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.309616 4904 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.238146955 podStartE2EDuration="6.309590671s" podCreationTimestamp="2026-02-23 10:26:43 +0000 UTC" firstStartedPulling="2026-02-23 10:26:44.468610527 +0000 UTC m=+1237.888984040" lastFinishedPulling="2026-02-23 10:26:48.540054243 +0000 UTC m=+1241.960427756" observedRunningTime="2026-02-23 10:26:49.303508087 +0000 UTC m=+1242.723881620" watchObservedRunningTime="2026-02-23 10:26:49.309590671 +0000 UTC m=+1242.729964184" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.319058 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7bvr2" event={"ID":"28443614-6612-4fc3-9043-782b2175ddb3","Type":"ContainerStarted","Data":"f6e1341e604eab099d266f1eee7d0799cd7bd22238a63d2e1793bc32284a8d8b"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.319132 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7bvr2" event={"ID":"28443614-6612-4fc3-9043-782b2175ddb3","Type":"ContainerStarted","Data":"330dd4612cb3d2e2bca9c23fde9326c71bd163bc474e34c8bc010b1138a8f3a5"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.332051 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" event={"ID":"ca88f3bb-29c1-41ef-8355-f9c52d62438a","Type":"ContainerStarted","Data":"31359bc5018f2fe1d1eabf2ef89cec97248604ce8de611d76530e49c4778ff0f"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.332113 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" event={"ID":"ca88f3bb-29c1-41ef-8355-f9c52d62438a","Type":"ContainerStarted","Data":"34b1bac7114bc6b64705efcaf7a520c43929be3e0ceccc1d8c0c010ff03b92a5"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.339373 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ab80-account-create-update-j58px" podStartSLOduration=2.339352588 podStartE2EDuration="2.339352588s" podCreationTimestamp="2026-02-23 10:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:49.333697377 +0000 UTC m=+1242.754070890" watchObservedRunningTime="2026-02-23 10:26:49.339352588 +0000 UTC m=+1242.759726101" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.341971 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56599cf886-x6z6x" event={"ID":"16c88a53-6a67-457c-9cce-5fd72203ca30","Type":"ContainerDied","Data":"3861de26a4358bf4781849ee5ca536b4cd82a4e7fdab8927f9dd12117bacde93"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.342040 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56599cf886-x6z6x" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.342051 4904 scope.go:117] "RemoveContainer" containerID="e9caa45b5573448893f7c5e50da11c9e9c9b9b0d37fab7d7d8fa006cfc467548" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.347706 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8vhcb" event={"ID":"882ac903-0c5a-48c6-977d-645792363692","Type":"ContainerStarted","Data":"a429aaff0adc9dc09974d208064769742fb47e117fcea192a223b2031e45f9be"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.357109 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" event={"ID":"0650495a-8169-4d67-b016-f52cb76911b8","Type":"ContainerStarted","Data":"b1286e52cf71198aa18caec27a4d0772b7c2028fdd24de91e91fef54aba058ae"} Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.386037 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" podStartSLOduration=2.386011697 podStartE2EDuration="2.386011697s" podCreationTimestamp="2026-02-23 10:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:49.355136308 +0000 UTC m=+1242.775509821" watchObservedRunningTime="2026-02-23 10:26:49.386011697 +0000 UTC m=+1242.806385210" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.416327 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-7bvr2" podStartSLOduration=3.41630566 podStartE2EDuration="3.41630566s" podCreationTimestamp="2026-02-23 10:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:49.378679268 +0000 UTC m=+1242.799052781" watchObservedRunningTime="2026-02-23 10:26:49.41630566 +0000 UTC m=+1242.836679173" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.446241 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-8vhcb" podStartSLOduration=2.446214252 podStartE2EDuration="2.446214252s" podCreationTimestamp="2026-02-23 10:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:49.401140938 +0000 UTC m=+1242.821514451" watchObservedRunningTime="2026-02-23 10:26:49.446214252 +0000 UTC m=+1242.866587755" Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.492699 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56599cf886-x6z6x"] Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.515590 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56599cf886-x6z6x"] Feb 23 10:26:49 crc kubenswrapper[4904]: I0223 10:26:49.573097 4904 scope.go:117] "RemoveContainer" containerID="33d0abadfeed8f191cd4e6b8bd96fedb35d9389fe8aa1a5686f8cd4c2edf0bf6" Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.369439 4904 generic.go:334] "Generic (PLEG): container finished" podID="49f85bef-7ffe-4784-8ba2-3bf8c6762e17" containerID="faca134c21005630ea687c2e78a51f0b353b6469036a4874d7a7be65fb15caf3" exitCode=0 Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.369600 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xgx5c" 
event={"ID":"49f85bef-7ffe-4784-8ba2-3bf8c6762e17","Type":"ContainerDied","Data":"faca134c21005630ea687c2e78a51f0b353b6469036a4874d7a7be65fb15caf3"} Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.372687 4904 generic.go:334] "Generic (PLEG): container finished" podID="ca88f3bb-29c1-41ef-8355-f9c52d62438a" containerID="31359bc5018f2fe1d1eabf2ef89cec97248604ce8de611d76530e49c4778ff0f" exitCode=0 Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.372760 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" event={"ID":"ca88f3bb-29c1-41ef-8355-f9c52d62438a","Type":"ContainerDied","Data":"31359bc5018f2fe1d1eabf2ef89cec97248604ce8de611d76530e49c4778ff0f"} Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.376443 4904 generic.go:334] "Generic (PLEG): container finished" podID="882ac903-0c5a-48c6-977d-645792363692" containerID="29e2bb6a8f663b61ff4c1afdd97853563a57cda1a69ff975a010bd7248585506" exitCode=0 Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.376512 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8vhcb" event={"ID":"882ac903-0c5a-48c6-977d-645792363692","Type":"ContainerDied","Data":"29e2bb6a8f663b61ff4c1afdd97853563a57cda1a69ff975a010bd7248585506"} Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.378388 4904 generic.go:334] "Generic (PLEG): container finished" podID="0650495a-8169-4d67-b016-f52cb76911b8" containerID="a7e6dc5b560a690894c4f234267eaaa3e8260db08bc78c71cd6efc69ec379d05" exitCode=0 Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.378456 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" event={"ID":"0650495a-8169-4d67-b016-f52cb76911b8","Type":"ContainerDied","Data":"a7e6dc5b560a690894c4f234267eaaa3e8260db08bc78c71cd6efc69ec379d05"} Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.382199 4904 generic.go:334] "Generic (PLEG): container finished" podID="28443614-6612-4fc3-9043-782b2175ddb3" containerID="f6e1341e604eab099d266f1eee7d0799cd7bd22238a63d2e1793bc32284a8d8b" exitCode=0 Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.382230 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7bvr2" event={"ID":"28443614-6612-4fc3-9043-782b2175ddb3","Type":"ContainerDied","Data":"f6e1341e604eab099d266f1eee7d0799cd7bd22238a63d2e1793bc32284a8d8b"} Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.384761 4904 generic.go:334] "Generic (PLEG): container finished" podID="4c368d56-bebc-433e-8519-9b8ab1ef51a4" containerID="3ca5f15ceacdc0b49538001afaf0255d5b4d7ab4c67657de7daa976c565130e7" exitCode=0 Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.386131 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ab80-account-create-update-j58px" event={"ID":"4c368d56-bebc-433e-8519-9b8ab1ef51a4","Type":"ContainerDied","Data":"3ca5f15ceacdc0b49538001afaf0255d5b4d7ab4c67657de7daa976c565130e7"} Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.917605 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.918111 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="f53ac71c-9251-491f-8c96-da8a2b408b48" containerName="watcher-decision-engine" containerID="cri-o://95c7cf082496dd9eb45b4f0cc9a60cfbf0718c5f4d719cc1184ff68acaa582d4" gracePeriod=30 
Feb 23 10:26:50 crc kubenswrapper[4904]: I0223 10:26:50.973889 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:51 crc kubenswrapper[4904]: I0223 10:26:51.269875 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" path="/var/lib/kubelet/pods/16c88a53-6a67-457c-9cce-5fd72203ca30/volumes" Feb 23 10:26:51 crc kubenswrapper[4904]: I0223 10:26:51.398003 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="ceilometer-central-agent" containerID="cri-o://5876f58320ba2fda9d5a3bde5a784d7eea607299efab8816380aac2c39de4f74" gracePeriod=30 Feb 23 10:26:51 crc kubenswrapper[4904]: I0223 10:26:51.398065 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="ceilometer-notification-agent" containerID="cri-o://964e6f5458863f11647447362d607d4fb4a15dda069cbd90a2e7693f5c49b51f" gracePeriod=30 Feb 23 10:26:51 crc kubenswrapper[4904]: I0223 10:26:51.398113 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="proxy-httpd" containerID="cri-o://d87c51e9b54e2073525ab467a81175d147b5890affcb3be45042e11bdcdf9b9a" gracePeriod=30 Feb 23 10:26:51 crc kubenswrapper[4904]: I0223 10:26:51.398309 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="sg-core" containerID="cri-o://dd2f5440f82603511f11622af292f0ed0d1a0833269abd220b84be0b3771841e" gracePeriod=30 Feb 23 10:26:51 crc kubenswrapper[4904]: I0223 10:26:51.919494 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.049133 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw66m\" (UniqueName: \"kubernetes.io/projected/0650495a-8169-4d67-b016-f52cb76911b8-kube-api-access-fw66m\") pod \"0650495a-8169-4d67-b016-f52cb76911b8\" (UID: \"0650495a-8169-4d67-b016-f52cb76911b8\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.049255 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0650495a-8169-4d67-b016-f52cb76911b8-operator-scripts\") pod \"0650495a-8169-4d67-b016-f52cb76911b8\" (UID: \"0650495a-8169-4d67-b016-f52cb76911b8\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.050692 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0650495a-8169-4d67-b016-f52cb76911b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0650495a-8169-4d67-b016-f52cb76911b8" (UID: "0650495a-8169-4d67-b016-f52cb76911b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.064103 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0650495a-8169-4d67-b016-f52cb76911b8-kube-api-access-fw66m" (OuterVolumeSpecName: "kube-api-access-fw66m") pod "0650495a-8169-4d67-b016-f52cb76911b8" (UID: "0650495a-8169-4d67-b016-f52cb76911b8"). 
InnerVolumeSpecName "kube-api-access-fw66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.147090 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.151782 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw66m\" (UniqueName: \"kubernetes.io/projected/0650495a-8169-4d67-b016-f52cb76911b8-kube-api-access-fw66m\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.151811 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0650495a-8169-4d67-b016-f52cb76911b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.156730 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.183916 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.193272 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.215832 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.252922 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-operator-scripts\") pod \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\" (UID: \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.253171 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bdzr\" (UniqueName: \"kubernetes.io/projected/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-kube-api-access-5bdzr\") pod \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\" (UID: \"49f85bef-7ffe-4784-8ba2-3bf8c6762e17\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.253289 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5h87\" (UniqueName: \"kubernetes.io/projected/882ac903-0c5a-48c6-977d-645792363692-kube-api-access-p5h87\") pod \"882ac903-0c5a-48c6-977d-645792363692\" (UID: \"882ac903-0c5a-48c6-977d-645792363692\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.253343 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c368d56-bebc-433e-8519-9b8ab1ef51a4-operator-scripts\") pod \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\" (UID: \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.253380 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882ac903-0c5a-48c6-977d-645792363692-operator-scripts\") pod \"882ac903-0c5a-48c6-977d-645792363692\" (UID: \"882ac903-0c5a-48c6-977d-645792363692\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.253414 4904 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zfsbj\" (UniqueName: \"kubernetes.io/projected/4c368d56-bebc-433e-8519-9b8ab1ef51a4-kube-api-access-zfsbj\") pod \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\" (UID: \"4c368d56-bebc-433e-8519-9b8ab1ef51a4\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.259045 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c368d56-bebc-433e-8519-9b8ab1ef51a4-kube-api-access-zfsbj" (OuterVolumeSpecName: "kube-api-access-zfsbj") pod "4c368d56-bebc-433e-8519-9b8ab1ef51a4" (UID: "4c368d56-bebc-433e-8519-9b8ab1ef51a4"). InnerVolumeSpecName "kube-api-access-zfsbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.259603 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49f85bef-7ffe-4784-8ba2-3bf8c6762e17" (UID: "49f85bef-7ffe-4784-8ba2-3bf8c6762e17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.261186 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882ac903-0c5a-48c6-977d-645792363692-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "882ac903-0c5a-48c6-977d-645792363692" (UID: "882ac903-0c5a-48c6-977d-645792363692"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.263246 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c368d56-bebc-433e-8519-9b8ab1ef51a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c368d56-bebc-433e-8519-9b8ab1ef51a4" (UID: "4c368d56-bebc-433e-8519-9b8ab1ef51a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.263928 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-kube-api-access-5bdzr" (OuterVolumeSpecName: "kube-api-access-5bdzr") pod "49f85bef-7ffe-4784-8ba2-3bf8c6762e17" (UID: "49f85bef-7ffe-4784-8ba2-3bf8c6762e17"). InnerVolumeSpecName "kube-api-access-5bdzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.271658 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882ac903-0c5a-48c6-977d-645792363692-kube-api-access-p5h87" (OuterVolumeSpecName: "kube-api-access-p5h87") pod "882ac903-0c5a-48c6-977d-645792363692" (UID: "882ac903-0c5a-48c6-977d-645792363692"). InnerVolumeSpecName "kube-api-access-p5h87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.355233 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gsh9\" (UniqueName: \"kubernetes.io/projected/ca88f3bb-29c1-41ef-8355-f9c52d62438a-kube-api-access-9gsh9\") pod \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\" (UID: \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.355327 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28443614-6612-4fc3-9043-782b2175ddb3-operator-scripts\") pod \"28443614-6612-4fc3-9043-782b2175ddb3\" (UID: \"28443614-6612-4fc3-9043-782b2175ddb3\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.355389 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca88f3bb-29c1-41ef-8355-f9c52d62438a-operator-scripts\") pod \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\" (UID: \"ca88f3bb-29c1-41ef-8355-f9c52d62438a\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.355646 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-865gr\" (UniqueName: \"kubernetes.io/projected/28443614-6612-4fc3-9043-782b2175ddb3-kube-api-access-865gr\") pod \"28443614-6612-4fc3-9043-782b2175ddb3\" (UID: \"28443614-6612-4fc3-9043-782b2175ddb3\") " Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.355839 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28443614-6612-4fc3-9043-782b2175ddb3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28443614-6612-4fc3-9043-782b2175ddb3" (UID: "28443614-6612-4fc3-9043-782b2175ddb3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.356540 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bdzr\" (UniqueName: \"kubernetes.io/projected/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-kube-api-access-5bdzr\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.356567 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5h87\" (UniqueName: \"kubernetes.io/projected/882ac903-0c5a-48c6-977d-645792363692-kube-api-access-p5h87\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.356580 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28443614-6612-4fc3-9043-782b2175ddb3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.356593 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c368d56-bebc-433e-8519-9b8ab1ef51a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.356605 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882ac903-0c5a-48c6-977d-645792363692-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.356617 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfsbj\" (UniqueName: \"kubernetes.io/projected/4c368d56-bebc-433e-8519-9b8ab1ef51a4-kube-api-access-zfsbj\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.356628 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f85bef-7ffe-4784-8ba2-3bf8c6762e17-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.357557 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca88f3bb-29c1-41ef-8355-f9c52d62438a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca88f3bb-29c1-41ef-8355-f9c52d62438a" (UID: "ca88f3bb-29c1-41ef-8355-f9c52d62438a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.360815 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28443614-6612-4fc3-9043-782b2175ddb3-kube-api-access-865gr" (OuterVolumeSpecName: "kube-api-access-865gr") pod "28443614-6612-4fc3-9043-782b2175ddb3" (UID: "28443614-6612-4fc3-9043-782b2175ddb3"). InnerVolumeSpecName "kube-api-access-865gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.360994 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca88f3bb-29c1-41ef-8355-f9c52d62438a-kube-api-access-9gsh9" (OuterVolumeSpecName: "kube-api-access-9gsh9") pod "ca88f3bb-29c1-41ef-8355-f9c52d62438a" (UID: "ca88f3bb-29c1-41ef-8355-f9c52d62438a"). InnerVolumeSpecName "kube-api-access-9gsh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.411292 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" event={"ID":"ca88f3bb-29c1-41ef-8355-f9c52d62438a","Type":"ContainerDied","Data":"34b1bac7114bc6b64705efcaf7a520c43929be3e0ceccc1d8c0c010ff03b92a5"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.411354 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34b1bac7114bc6b64705efcaf7a520c43929be3e0ceccc1d8c0c010ff03b92a5" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.411319 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8f5a-account-create-update-q5hlx" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.412883 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8vhcb" event={"ID":"882ac903-0c5a-48c6-977d-645792363692","Type":"ContainerDied","Data":"a429aaff0adc9dc09974d208064769742fb47e117fcea192a223b2031e45f9be"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.412908 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a429aaff0adc9dc09974d208064769742fb47e117fcea192a223b2031e45f9be" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.412959 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8vhcb" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.415868 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.415875 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-eeda-account-create-update-4g7sc" event={"ID":"0650495a-8169-4d67-b016-f52cb76911b8","Type":"ContainerDied","Data":"b1286e52cf71198aa18caec27a4d0772b7c2028fdd24de91e91fef54aba058ae"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.415933 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1286e52cf71198aa18caec27a4d0772b7c2028fdd24de91e91fef54aba058ae" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.432894 4904 generic.go:334] "Generic (PLEG): container finished" podID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerID="d87c51e9b54e2073525ab467a81175d147b5890affcb3be45042e11bdcdf9b9a" exitCode=0 Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.432928 4904 generic.go:334] "Generic (PLEG): container finished" podID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerID="dd2f5440f82603511f11622af292f0ed0d1a0833269abd220b84be0b3771841e" exitCode=2 Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.432935 4904 generic.go:334] "Generic (PLEG): container finished" podID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerID="964e6f5458863f11647447362d607d4fb4a15dda069cbd90a2e7693f5c49b51f" exitCode=0 Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.433007 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerDied","Data":"d87c51e9b54e2073525ab467a81175d147b5890affcb3be45042e11bdcdf9b9a"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.433071 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerDied","Data":"dd2f5440f82603511f11622af292f0ed0d1a0833269abd220b84be0b3771841e"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.433094 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerDied","Data":"964e6f5458863f11647447362d607d4fb4a15dda069cbd90a2e7693f5c49b51f"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.435355 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7bvr2" event={"ID":"28443614-6612-4fc3-9043-782b2175ddb3","Type":"ContainerDied","Data":"330dd4612cb3d2e2bca9c23fde9326c71bd163bc474e34c8bc010b1138a8f3a5"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.435394 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330dd4612cb3d2e2bca9c23fde9326c71bd163bc474e34c8bc010b1138a8f3a5" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.435459 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7bvr2" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.438918 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ab80-account-create-update-j58px" event={"ID":"4c368d56-bebc-433e-8519-9b8ab1ef51a4","Type":"ContainerDied","Data":"7afc19f4f30ae6cbfed349d50e646cce79c557e3810aba1220be2cae75ebe062"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.438963 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7afc19f4f30ae6cbfed349d50e646cce79c557e3810aba1220be2cae75ebe062" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.439022 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ab80-account-create-update-j58px" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.441921 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xgx5c" event={"ID":"49f85bef-7ffe-4784-8ba2-3bf8c6762e17","Type":"ContainerDied","Data":"9f30741bcc6b8228dcb938823dec25f2af5cf3bdb13bd2931e4af9558398f068"} Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.441963 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f30741bcc6b8228dcb938823dec25f2af5cf3bdb13bd2931e4af9558398f068" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.441978 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xgx5c" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.459450 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gsh9\" (UniqueName: \"kubernetes.io/projected/ca88f3bb-29c1-41ef-8355-f9c52d62438a-kube-api-access-9gsh9\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.459477 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca88f3bb-29c1-41ef-8355-f9c52d62438a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:52 crc kubenswrapper[4904]: I0223 10:26:52.459488 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-865gr\" (UniqueName: \"kubernetes.io/projected/28443614-6612-4fc3-9043-782b2175ddb3-kube-api-access-865gr\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:53 crc kubenswrapper[4904]: E0223 10:26:53.391555 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95c7cf082496dd9eb45b4f0cc9a60cfbf0718c5f4d719cc1184ff68acaa582d4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 23 10:26:53 crc kubenswrapper[4904]: E0223 10:26:53.393262 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95c7cf082496dd9eb45b4f0cc9a60cfbf0718c5f4d719cc1184ff68acaa582d4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 23 10:26:53 crc kubenswrapper[4904]: E0223 10:26:53.395395 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="95c7cf082496dd9eb45b4f0cc9a60cfbf0718c5f4d719cc1184ff68acaa582d4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 23 10:26:53 crc kubenswrapper[4904]: E0223 10:26:53.395492 4904 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="f53ac71c-9251-491f-8c96-da8a2b408b48" containerName="watcher-decision-engine" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.045021 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.045313 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.109363 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.127252 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.468774 4904 generic.go:334] "Generic (PLEG): container finished" podID="f53ac71c-9251-491f-8c96-da8a2b408b48" containerID="95c7cf082496dd9eb45b4f0cc9a60cfbf0718c5f4d719cc1184ff68acaa582d4" exitCode=0 Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 
10:26:54.469075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f53ac71c-9251-491f-8c96-da8a2b408b48","Type":"ContainerDied","Data":"95c7cf082496dd9eb45b4f0cc9a60cfbf0718c5f4d719cc1184ff68acaa582d4"} Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.473875 4904 generic.go:334] "Generic (PLEG): container finished" podID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerID="5876f58320ba2fda9d5a3bde5a784d7eea607299efab8816380aac2c39de4f74" exitCode=0 Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.473955 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerDied","Data":"5876f58320ba2fda9d5a3bde5a784d7eea607299efab8816380aac2c39de4f74"} Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.474010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2d1c60a-e612-442b-9c87-28262f0fcde6","Type":"ContainerDied","Data":"12b809d2615d4538a963748bfbaa9f5c6fc2d61d9908957d5e91b084164af64b"} Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.474023 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b809d2615d4538a963748bfbaa9f5c6fc2d61d9908957d5e91b084164af64b" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.474403 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.474444 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.515935 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.615242 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-run-httpd\") pod \"d2d1c60a-e612-442b-9c87-28262f0fcde6\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.615334 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjkzr\" (UniqueName: \"kubernetes.io/projected/d2d1c60a-e612-442b-9c87-28262f0fcde6-kube-api-access-jjkzr\") pod \"d2d1c60a-e612-442b-9c87-28262f0fcde6\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.615506 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-config-data\") pod \"d2d1c60a-e612-442b-9c87-28262f0fcde6\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.615571 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-log-httpd\") pod \"d2d1c60a-e612-442b-9c87-28262f0fcde6\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.615666 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d2d1c60a-e612-442b-9c87-28262f0fcde6" (UID: "d2d1c60a-e612-442b-9c87-28262f0fcde6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.615685 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-scripts\") pod \"d2d1c60a-e612-442b-9c87-28262f0fcde6\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.615913 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-sg-core-conf-yaml\") pod \"d2d1c60a-e612-442b-9c87-28262f0fcde6\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.616103 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-combined-ca-bundle\") pod \"d2d1c60a-e612-442b-9c87-28262f0fcde6\" (UID: \"d2d1c60a-e612-442b-9c87-28262f0fcde6\") " Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.616156 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d2d1c60a-e612-442b-9c87-28262f0fcde6" (UID: "d2d1c60a-e612-442b-9c87-28262f0fcde6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.617285 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.617313 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2d1c60a-e612-442b-9c87-28262f0fcde6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.623340 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d1c60a-e612-442b-9c87-28262f0fcde6-kube-api-access-jjkzr" (OuterVolumeSpecName: "kube-api-access-jjkzr") pod "d2d1c60a-e612-442b-9c87-28262f0fcde6" (UID: "d2d1c60a-e612-442b-9c87-28262f0fcde6"). InnerVolumeSpecName "kube-api-access-jjkzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.625404 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-scripts" (OuterVolumeSpecName: "scripts") pod "d2d1c60a-e612-442b-9c87-28262f0fcde6" (UID: "d2d1c60a-e612-442b-9c87-28262f0fcde6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.720538 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.720571 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjkzr\" (UniqueName: \"kubernetes.io/projected/d2d1c60a-e612-442b-9c87-28262f0fcde6-kube-api-access-jjkzr\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.772032 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d2d1c60a-e612-442b-9c87-28262f0fcde6" (UID: "d2d1c60a-e612-442b-9c87-28262f0fcde6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.823592 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.823663 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.825900 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.841593 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.843982 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2d1c60a-e612-442b-9c87-28262f0fcde6" (UID: "d2d1c60a-e612-442b-9c87-28262f0fcde6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.930926 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.941025 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 10:26:54 crc kubenswrapper[4904]: I0223 10:26:54.971044 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.015924 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-config-data" (OuterVolumeSpecName: "config-data") pod "d2d1c60a-e612-442b-9c87-28262f0fcde6" (UID: "d2d1c60a-e612-442b-9c87-28262f0fcde6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.033425 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8ccq\" (UniqueName: \"kubernetes.io/projected/f53ac71c-9251-491f-8c96-da8a2b408b48-kube-api-access-b8ccq\") pod \"f53ac71c-9251-491f-8c96-da8a2b408b48\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.033669 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-combined-ca-bundle\") pod \"f53ac71c-9251-491f-8c96-da8a2b408b48\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.033863 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-custom-prometheus-ca\") pod \"f53ac71c-9251-491f-8c96-da8a2b408b48\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.033949 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-config-data\") pod \"f53ac71c-9251-491f-8c96-da8a2b408b48\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.034335 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53ac71c-9251-491f-8c96-da8a2b408b48-logs\") pod \"f53ac71c-9251-491f-8c96-da8a2b408b48\" (UID: \"f53ac71c-9251-491f-8c96-da8a2b408b48\") " Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.035482 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2d1c60a-e612-442b-9c87-28262f0fcde6-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.036525 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53ac71c-9251-491f-8c96-da8a2b408b48-kube-api-access-b8ccq" (OuterVolumeSpecName: "kube-api-access-b8ccq") pod "f53ac71c-9251-491f-8c96-da8a2b408b48" (UID: "f53ac71c-9251-491f-8c96-da8a2b408b48"). InnerVolumeSpecName "kube-api-access-b8ccq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.036797 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f53ac71c-9251-491f-8c96-da8a2b408b48-logs" (OuterVolumeSpecName: "logs") pod "f53ac71c-9251-491f-8c96-da8a2b408b48" (UID: "f53ac71c-9251-491f-8c96-da8a2b408b48"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.062939 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f53ac71c-9251-491f-8c96-da8a2b408b48" (UID: "f53ac71c-9251-491f-8c96-da8a2b408b48"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.066022 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f53ac71c-9251-491f-8c96-da8a2b408b48" (UID: "f53ac71c-9251-491f-8c96-da8a2b408b48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.105896 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-config-data" (OuterVolumeSpecName: "config-data") pod "f53ac71c-9251-491f-8c96-da8a2b408b48" (UID: "f53ac71c-9251-491f-8c96-da8a2b408b48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.137671 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.137715 4904 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.137745 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53ac71c-9251-491f-8c96-da8a2b408b48-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.137755 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53ac71c-9251-491f-8c96-da8a2b408b48-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.137769 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8ccq\" (UniqueName: \"kubernetes.io/projected/f53ac71c-9251-491f-8c96-da8a2b408b48-kube-api-access-b8ccq\") on node \"crc\" DevicePath \"\"" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.488427 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.488928 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f53ac71c-9251-491f-8c96-da8a2b408b48","Type":"ContainerDied","Data":"f6b16e72482b41fdc3c8c23c9715b57d9e0bfcf7aa9e90a5694a3b7f6c78d487"} Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.488968 4904 scope.go:117] "RemoveContainer" containerID="95c7cf082496dd9eb45b4f0cc9a60cfbf0718c5f4d719cc1184ff68acaa582d4" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.489089 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.490298 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.490351 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.521818 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.534401 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.547801 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.560074 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574057 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574619 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882ac903-0c5a-48c6-977d-645792363692" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574636 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="882ac903-0c5a-48c6-977d-645792363692" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574655 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0650495a-8169-4d67-b016-f52cb76911b8" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574662 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0650495a-8169-4d67-b016-f52cb76911b8" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574677 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c368d56-bebc-433e-8519-9b8ab1ef51a4" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574683 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c368d56-bebc-433e-8519-9b8ab1ef51a4" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574694 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53ac71c-9251-491f-8c96-da8a2b408b48" containerName="watcher-decision-engine" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574702 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53ac71c-9251-491f-8c96-da8a2b408b48" containerName="watcher-decision-engine" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574713 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f85bef-7ffe-4784-8ba2-3bf8c6762e17" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574734 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f85bef-7ffe-4784-8ba2-3bf8c6762e17" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574743 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerName="placement-api" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574749 4904 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerName="placement-api" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574763 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerName="placement-log" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574770 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerName="placement-log" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574787 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28443614-6612-4fc3-9043-782b2175ddb3" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574793 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="28443614-6612-4fc3-9043-782b2175ddb3" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574806 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574813 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574826 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca88f3bb-29c1-41ef-8355-f9c52d62438a" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574833 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca88f3bb-29c1-41ef-8355-f9c52d62438a" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574847 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="proxy-httpd" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574853 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="proxy-httpd" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574868 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="ceilometer-notification-agent" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574874 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="ceilometer-notification-agent" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574885 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon-log" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574890 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon-log" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574899 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="sg-core" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.574905 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="sg-core" Feb 23 10:26:55 crc kubenswrapper[4904]: E0223 10:26:55.574920 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="ceilometer-central-agent" Feb 23 10:26:55 crc kubenswrapper[4904]: 
I0223 10:26:55.574926 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="ceilometer-central-agent" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575125 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="ceilometer-central-agent" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575140 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="sg-core" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575152 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon-log" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575171 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca88f3bb-29c1-41ef-8355-f9c52d62438a" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575181 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="882ac903-0c5a-48c6-977d-645792363692" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575189 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53ac71c-9251-491f-8c96-da8a2b408b48" containerName="watcher-decision-engine" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575196 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerName="placement-api" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575205 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="ceilometer-notification-agent" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575213 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c368d56-bebc-433e-8519-9b8ab1ef51a4" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575222 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" containerName="proxy-httpd" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575232 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0650495a-8169-4d67-b016-f52cb76911b8" containerName="mariadb-account-create-update" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575242 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c88a53-6a67-457c-9cce-5fd72203ca30" containerName="horizon" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575254 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="28443614-6612-4fc3-9043-782b2175ddb3" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575261 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f8a283-bc3b-4cd4-ab91-244942a44e58" containerName="placement-log" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.575268 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f85bef-7ffe-4784-8ba2-3bf8c6762e17" containerName="mariadb-database-create" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.577357 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.587209 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.587344 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.595313 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.612827 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.614973 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.634213 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.658287 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.751758 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw55f\" (UniqueName: \"kubernetes.io/projected/670d406b-5d55-476b-b8a9-afc89a30219e-kube-api-access-nw55f\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.751815 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54c70650-ca5e-4eaf-92af-2704a01edf49-logs\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.751855 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.751938 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-scripts\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.752008 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-log-httpd\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.752054 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.752080 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-config-data\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.752144 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-config-data\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.752217 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.752257 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-run-httpd\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.752304 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbxxw\" (UniqueName: \"kubernetes.io/projected/54c70650-ca5e-4eaf-92af-2704a01edf49-kube-api-access-sbxxw\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.752339 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855116 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw55f\" (UniqueName: \"kubernetes.io/projected/670d406b-5d55-476b-b8a9-afc89a30219e-kube-api-access-nw55f\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855176 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54c70650-ca5e-4eaf-92af-2704a01edf49-logs\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855240 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855283 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-scripts\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855323 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-log-httpd\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855344 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855386 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-config-data\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855454 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-config-data\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855538 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855561 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-run-httpd\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855600 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbxxw\" (UniqueName: \"kubernetes.io/projected/54c70650-ca5e-4eaf-92af-2704a01edf49-kube-api-access-sbxxw\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.855651 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.856164 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-log-httpd\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.856511 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-run-httpd\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.857009 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54c70650-ca5e-4eaf-92af-2704a01edf49-logs\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.861089 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-scripts\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.862184 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.862510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-config-data\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.862771 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.863562 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/54c70650-ca5e-4eaf-92af-2704a01edf49-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.867526 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.875163 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-config-data\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.875291 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbxxw\" (UniqueName: \"kubernetes.io/projected/54c70650-ca5e-4eaf-92af-2704a01edf49-kube-api-access-sbxxw\") pod \"watcher-decision-engine-0\" (UID: \"54c70650-ca5e-4eaf-92af-2704a01edf49\") " pod="openstack/watcher-decision-engine-0" Feb 23 10:26:55 crc 
kubenswrapper[4904]: I0223 10:26:55.881629 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw55f\" (UniqueName: \"kubernetes.io/projected/670d406b-5d55-476b-b8a9-afc89a30219e-kube-api-access-nw55f\") pod \"ceilometer-0\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.909088 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:26:55 crc kubenswrapper[4904]: I0223 10:26:55.939918 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 23 10:26:56 crc kubenswrapper[4904]: I0223 10:26:56.338896 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:56 crc kubenswrapper[4904]: I0223 10:26:56.464237 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 23 10:26:56 crc kubenswrapper[4904]: W0223 10:26:56.465164 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54c70650_ca5e_4eaf_92af_2704a01edf49.slice/crio-41fd48f58ce4c228b4e09cf96d8f2362cff69e7e06f704606d5497ddf7bd9736 WatchSource:0}: Error finding container 41fd48f58ce4c228b4e09cf96d8f2362cff69e7e06f704606d5497ddf7bd9736: Status 404 returned error can't find the container with id 41fd48f58ce4c228b4e09cf96d8f2362cff69e7e06f704606d5497ddf7bd9736 Feb 23 10:26:56 crc kubenswrapper[4904]: I0223 10:26:56.498610 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"54c70650-ca5e-4eaf-92af-2704a01edf49","Type":"ContainerStarted","Data":"41fd48f58ce4c228b4e09cf96d8f2362cff69e7e06f704606d5497ddf7bd9736"} Feb 23 10:26:56 crc kubenswrapper[4904]: I0223 10:26:56.610397 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:26:56 crc kubenswrapper[4904]: W0223 10:26:56.612940 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod670d406b_5d55_476b_b8a9_afc89a30219e.slice/crio-a36dc991b0987f3af28ddde31f98240e67df5bad800653be4cd08fd6f11cdfcb WatchSource:0}: Error finding container a36dc991b0987f3af28ddde31f98240e67df5bad800653be4cd08fd6f11cdfcb: Status 404 returned error can't find the container with id a36dc991b0987f3af28ddde31f98240e67df5bad800653be4cd08fd6f11cdfcb Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.276939 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d1c60a-e612-442b-9c87-28262f0fcde6" path="/var/lib/kubelet/pods/d2d1c60a-e612-442b-9c87-28262f0fcde6/volumes" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.280478 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f53ac71c-9251-491f-8c96-da8a2b408b48" path="/var/lib/kubelet/pods/f53ac71c-9251-491f-8c96-da8a2b408b48/volumes" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.365986 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.366502 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.370911 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 
23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.546674 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerStarted","Data":"4ba716a5c2914840e8c52237c75abe9b3cc615bc283bc54dae325338b8f2cfeb"} Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.546746 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerStarted","Data":"a36dc991b0987f3af28ddde31f98240e67df5bad800653be4cd08fd6f11cdfcb"} Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.560336 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"54c70650-ca5e-4eaf-92af-2704a01edf49","Type":"ContainerStarted","Data":"1b710398afdcf97983693a4747266376295165de7b54163e7979043c3737f4cb"} Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.587784 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.587764557 podStartE2EDuration="2.587764557s" podCreationTimestamp="2026-02-23 10:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:26:57.586781129 +0000 UTC m=+1251.007154642" watchObservedRunningTime="2026-02-23 10:26:57.587764557 +0000 UTC m=+1251.008138070" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.679844 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thqwm"] Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.681415 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.689036 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sj5pq" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.689328 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.693334 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.707918 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thqwm"] Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.723076 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.723150 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg45p\" (UniqueName: \"kubernetes.io/projected/12d42a58-0412-45e7-85a3-99a0a16346bc-kube-api-access-sg45p\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.723198 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-scripts\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.723533 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-config-data\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.825546 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-config-data\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.825688 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.825716 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg45p\" (UniqueName: \"kubernetes.io/projected/12d42a58-0412-45e7-85a3-99a0a16346bc-kube-api-access-sg45p\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.825756 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-scripts\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.833709 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-scripts\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.834254 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.844439 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-config-data\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:57 crc kubenswrapper[4904]: I0223 10:26:57.865954 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sg45p\" (UniqueName: \"kubernetes.io/projected/12d42a58-0412-45e7-85a3-99a0a16346bc-kube-api-access-sg45p\") pod \"nova-cell0-conductor-db-sync-thqwm\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:58 crc kubenswrapper[4904]: I0223 10:26:58.003916 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:26:58 crc kubenswrapper[4904]: I0223 10:26:58.497325 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 10:26:58 crc kubenswrapper[4904]: I0223 10:26:58.497832 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 10:26:58 crc kubenswrapper[4904]: I0223 10:26:58.502237 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 10:26:58 crc kubenswrapper[4904]: I0223 10:26:58.548745 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thqwm"] Feb 23 10:26:58 crc kubenswrapper[4904]: I0223 10:26:58.574202 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-thqwm" event={"ID":"12d42a58-0412-45e7-85a3-99a0a16346bc","Type":"ContainerStarted","Data":"712a3a5a8203c1e07400ca6d4ad1408dd26dc9e539d6cb2f3afa827452c41112"} Feb 23 10:26:58 crc kubenswrapper[4904]: I0223 10:26:58.577215 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerStarted","Data":"ba58694585c0325fdba58ea818bf3f5001c9641cbf33143a6865308944cb1e23"} Feb 23 10:26:59 crc kubenswrapper[4904]: I0223 10:26:59.621816 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerStarted","Data":"ae87a4aeccb3cdb31bd1a9c78763e7363ccdb2bcd6e4e18d13cb13aecaf55ec2"} Feb 23 10:27:01 crc kubenswrapper[4904]: I0223 10:27:01.664300 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerStarted","Data":"436e3e49b99405d98962a234bce04ed5f26cc825f40c6af0f782cf917739aa77"} Feb 23 10:27:01 crc kubenswrapper[4904]: I0223 10:27:01.665122 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="ceilometer-central-agent" containerID="cri-o://4ba716a5c2914840e8c52237c75abe9b3cc615bc283bc54dae325338b8f2cfeb" gracePeriod=30 Feb 23 10:27:01 crc kubenswrapper[4904]: I0223 10:27:01.665525 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 10:27:01 crc kubenswrapper[4904]: I0223 10:27:01.666337 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="proxy-httpd" containerID="cri-o://436e3e49b99405d98962a234bce04ed5f26cc825f40c6af0f782cf917739aa77" gracePeriod=30 Feb 23 10:27:01 crc kubenswrapper[4904]: I0223 10:27:01.666431 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="sg-core" containerID="cri-o://ae87a4aeccb3cdb31bd1a9c78763e7363ccdb2bcd6e4e18d13cb13aecaf55ec2" 
gracePeriod=30 Feb 23 10:27:01 crc kubenswrapper[4904]: I0223 10:27:01.667025 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="ceilometer-notification-agent" containerID="cri-o://ba58694585c0325fdba58ea818bf3f5001c9641cbf33143a6865308944cb1e23" gracePeriod=30 Feb 23 10:27:01 crc kubenswrapper[4904]: I0223 10:27:01.693394 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.419202365 podStartE2EDuration="6.693370456s" podCreationTimestamp="2026-02-23 10:26:55 +0000 UTC" firstStartedPulling="2026-02-23 10:26:56.629097551 +0000 UTC m=+1250.049471064" lastFinishedPulling="2026-02-23 10:27:00.903265642 +0000 UTC m=+1254.323639155" observedRunningTime="2026-02-23 10:27:01.686685246 +0000 UTC m=+1255.107058759" watchObservedRunningTime="2026-02-23 10:27:01.693370456 +0000 UTC m=+1255.113743969" Feb 23 10:27:02 crc kubenswrapper[4904]: I0223 10:27:02.678114 4904 generic.go:334] "Generic (PLEG): container finished" podID="670d406b-5d55-476b-b8a9-afc89a30219e" containerID="436e3e49b99405d98962a234bce04ed5f26cc825f40c6af0f782cf917739aa77" exitCode=0 Feb 23 10:27:02 crc kubenswrapper[4904]: I0223 10:27:02.678483 4904 generic.go:334] "Generic (PLEG): container finished" podID="670d406b-5d55-476b-b8a9-afc89a30219e" containerID="ae87a4aeccb3cdb31bd1a9c78763e7363ccdb2bcd6e4e18d13cb13aecaf55ec2" exitCode=2 Feb 23 10:27:02 crc kubenswrapper[4904]: I0223 10:27:02.678496 4904 generic.go:334] "Generic (PLEG): container finished" podID="670d406b-5d55-476b-b8a9-afc89a30219e" containerID="ba58694585c0325fdba58ea818bf3f5001c9641cbf33143a6865308944cb1e23" exitCode=0 Feb 23 10:27:02 crc kubenswrapper[4904]: I0223 10:27:02.678197 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerDied","Data":"436e3e49b99405d98962a234bce04ed5f26cc825f40c6af0f782cf917739aa77"} Feb 23 10:27:02 crc kubenswrapper[4904]: I0223 10:27:02.678548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerDied","Data":"ae87a4aeccb3cdb31bd1a9c78763e7363ccdb2bcd6e4e18d13cb13aecaf55ec2"} Feb 23 10:27:02 crc kubenswrapper[4904]: I0223 10:27:02.678570 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerDied","Data":"ba58694585c0325fdba58ea818bf3f5001c9641cbf33143a6865308944cb1e23"} Feb 23 10:27:05 crc kubenswrapper[4904]: I0223 10:27:05.940324 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 23 10:27:06 crc kubenswrapper[4904]: I0223 10:27:06.031993 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 23 10:27:06 crc kubenswrapper[4904]: I0223 10:27:06.740541 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 23 10:27:06 crc kubenswrapper[4904]: I0223 10:27:06.791074 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 23 10:27:08 crc kubenswrapper[4904]: I0223 10:27:08.761490 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-thqwm" 
event={"ID":"12d42a58-0412-45e7-85a3-99a0a16346bc","Type":"ContainerStarted","Data":"93f694d8cb626fde538d36935b090b1383eee549aadd2a290312700694cdfa7b"} Feb 23 10:27:08 crc kubenswrapper[4904]: I0223 10:27:08.791440 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-thqwm" podStartSLOduration=2.22004013 podStartE2EDuration="11.791420779s" podCreationTimestamp="2026-02-23 10:26:57 +0000 UTC" firstStartedPulling="2026-02-23 10:26:58.53615014 +0000 UTC m=+1251.956523653" lastFinishedPulling="2026-02-23 10:27:08.107530719 +0000 UTC m=+1261.527904302" observedRunningTime="2026-02-23 10:27:08.78022842 +0000 UTC m=+1262.200601943" watchObservedRunningTime="2026-02-23 10:27:08.791420779 +0000 UTC m=+1262.211794312" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.793937 4904 generic.go:334] "Generic (PLEG): container finished" podID="670d406b-5d55-476b-b8a9-afc89a30219e" containerID="4ba716a5c2914840e8c52237c75abe9b3cc615bc283bc54dae325338b8f2cfeb" exitCode=0 Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.794026 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerDied","Data":"4ba716a5c2914840e8c52237c75abe9b3cc615bc283bc54dae325338b8f2cfeb"} Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.794409 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"670d406b-5d55-476b-b8a9-afc89a30219e","Type":"ContainerDied","Data":"a36dc991b0987f3af28ddde31f98240e67df5bad800653be4cd08fd6f11cdfcb"} Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.794454 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a36dc991b0987f3af28ddde31f98240e67df5bad800653be4cd08fd6f11cdfcb" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.866264 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.917002 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-log-httpd\") pod \"670d406b-5d55-476b-b8a9-afc89a30219e\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.917078 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-scripts\") pod \"670d406b-5d55-476b-b8a9-afc89a30219e\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.917135 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-combined-ca-bundle\") pod \"670d406b-5d55-476b-b8a9-afc89a30219e\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.917190 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-config-data\") pod \"670d406b-5d55-476b-b8a9-afc89a30219e\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.917316 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-run-httpd\") pod \"670d406b-5d55-476b-b8a9-afc89a30219e\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.917445 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw55f\" (UniqueName: \"kubernetes.io/projected/670d406b-5d55-476b-b8a9-afc89a30219e-kube-api-access-nw55f\") pod \"670d406b-5d55-476b-b8a9-afc89a30219e\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.917491 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-sg-core-conf-yaml\") pod \"670d406b-5d55-476b-b8a9-afc89a30219e\" (UID: \"670d406b-5d55-476b-b8a9-afc89a30219e\") " Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.917515 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "670d406b-5d55-476b-b8a9-afc89a30219e" (UID: "670d406b-5d55-476b-b8a9-afc89a30219e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.918337 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "670d406b-5d55-476b-b8a9-afc89a30219e" (UID: "670d406b-5d55-476b-b8a9-afc89a30219e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.920462 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.920944 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/670d406b-5d55-476b-b8a9-afc89a30219e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.941036 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670d406b-5d55-476b-b8a9-afc89a30219e-kube-api-access-nw55f" (OuterVolumeSpecName: "kube-api-access-nw55f") pod "670d406b-5d55-476b-b8a9-afc89a30219e" (UID: "670d406b-5d55-476b-b8a9-afc89a30219e"). InnerVolumeSpecName "kube-api-access-nw55f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.942987 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-scripts" (OuterVolumeSpecName: "scripts") pod "670d406b-5d55-476b-b8a9-afc89a30219e" (UID: "670d406b-5d55-476b-b8a9-afc89a30219e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:09 crc kubenswrapper[4904]: I0223 10:27:09.987975 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "670d406b-5d55-476b-b8a9-afc89a30219e" (UID: "670d406b-5d55-476b-b8a9-afc89a30219e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.024078 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw55f\" (UniqueName: \"kubernetes.io/projected/670d406b-5d55-476b-b8a9-afc89a30219e-kube-api-access-nw55f\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.031667 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.031770 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.048081 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "670d406b-5d55-476b-b8a9-afc89a30219e" (UID: "670d406b-5d55-476b-b8a9-afc89a30219e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.087688 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-config-data" (OuterVolumeSpecName: "config-data") pod "670d406b-5d55-476b-b8a9-afc89a30219e" (UID: "670d406b-5d55-476b-b8a9-afc89a30219e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.133586 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.133617 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670d406b-5d55-476b-b8a9-afc89a30219e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.807801 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.865973 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.886432 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.900378 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:10 crc kubenswrapper[4904]: E0223 10:27:10.907098 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="sg-core" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.907143 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="sg-core" Feb 23 10:27:10 crc kubenswrapper[4904]: E0223 10:27:10.907233 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="proxy-httpd" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.907241 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="proxy-httpd" Feb 23 10:27:10 crc kubenswrapper[4904]: E0223 10:27:10.907261 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="ceilometer-central-agent" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.907286 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="ceilometer-central-agent" Feb 23 10:27:10 crc kubenswrapper[4904]: E0223 10:27:10.907309 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="ceilometer-notification-agent" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.907316 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="ceilometer-notification-agent" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.907818 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="proxy-httpd" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.907837 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="sg-core" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.907879 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="ceilometer-central-agent" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.907900 4904 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="670d406b-5d55-476b-b8a9-afc89a30219e" containerName="ceilometer-notification-agent" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.928534 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.928669 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.936828 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 10:27:10 crc kubenswrapper[4904]: I0223 10:27:10.939469 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.062088 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.062148 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.062385 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-scripts\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.062490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.062663 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bch6\" (UniqueName: \"kubernetes.io/projected/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-kube-api-access-7bch6\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.062876 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.062979 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-config-data\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.165015 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.165069 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.165127 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-scripts\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.165155 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.165182 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bch6\" (UniqueName: \"kubernetes.io/projected/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-kube-api-access-7bch6\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.165234 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.165282 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-config-data\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.165674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.166299 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.170621 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.171188 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.171850 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-config-data\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.174021 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-scripts\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.186521 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bch6\" (UniqueName: \"kubernetes.io/projected/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-kube-api-access-7bch6\") pod \"ceilometer-0\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.266501 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.270127 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670d406b-5d55-476b-b8a9-afc89a30219e" path="/var/lib/kubelet/pods/670d406b-5d55-476b-b8a9-afc89a30219e/volumes" Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.757033 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:11 crc kubenswrapper[4904]: I0223 10:27:11.816865 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerStarted","Data":"92ceb762d64d717f96863530dfe89edaefff8fc846465da9e3fbcab8d826445a"} Feb 23 10:27:12 crc kubenswrapper[4904]: I0223 10:27:12.830212 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerStarted","Data":"e3ae86d5352287246cbf5492a8b487b3660a38a7ff3d0b8140b5b9afd22e4a6e"} Feb 23 10:27:12 crc kubenswrapper[4904]: I0223 10:27:12.890379 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:13 crc kubenswrapper[4904]: I0223 10:27:13.857397 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerStarted","Data":"f813e360ac8f96c887497a7bd194e767b41245c0043f526c72fcc849165d28ce"} Feb 23 10:27:14 crc kubenswrapper[4904]: I0223 10:27:14.870789 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerStarted","Data":"dfad4fcf842b5b63f73ce27c852ce6b5fff47dac76019802792048a202729d5c"} Feb 23 10:27:15 crc kubenswrapper[4904]: I0223 10:27:15.884797 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerStarted","Data":"3d5bcf495b5b48b476339b3f59e1ea1b67b5f85e9852cf8d8a2a03808b085e77"} Feb 23 10:27:15 crc kubenswrapper[4904]: I0223 10:27:15.885016 4904 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="ceilometer-central-agent" containerID="cri-o://e3ae86d5352287246cbf5492a8b487b3660a38a7ff3d0b8140b5b9afd22e4a6e" gracePeriod=30 Feb 23 10:27:15 crc kubenswrapper[4904]: I0223 10:27:15.885103 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="sg-core" containerID="cri-o://dfad4fcf842b5b63f73ce27c852ce6b5fff47dac76019802792048a202729d5c" gracePeriod=30 Feb 23 10:27:15 crc kubenswrapper[4904]: I0223 10:27:15.885145 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 10:27:15 crc kubenswrapper[4904]: I0223 10:27:15.885155 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="ceilometer-notification-agent" containerID="cri-o://f813e360ac8f96c887497a7bd194e767b41245c0043f526c72fcc849165d28ce" gracePeriod=30 Feb 23 10:27:15 crc kubenswrapper[4904]: I0223 10:27:15.885122 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="proxy-httpd" containerID="cri-o://3d5bcf495b5b48b476339b3f59e1ea1b67b5f85e9852cf8d8a2a03808b085e77" gracePeriod=30 Feb 23 10:27:15 crc kubenswrapper[4904]: I0223 10:27:15.917178 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.30128711 podStartE2EDuration="5.91715627s" podCreationTimestamp="2026-02-23 10:27:10 +0000 UTC" firstStartedPulling="2026-02-23 10:27:11.754979369 +0000 UTC m=+1265.175352882" lastFinishedPulling="2026-02-23 10:27:15.370848519 +0000 UTC m=+1268.791222042" observedRunningTime="2026-02-23 10:27:15.912622551 +0000 UTC m=+1269.332996064" watchObservedRunningTime="2026-02-23 10:27:15.91715627 +0000 UTC m=+1269.337529783" Feb 23 10:27:16 crc kubenswrapper[4904]: I0223 10:27:16.897140 4904 generic.go:334] "Generic (PLEG): container finished" podID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerID="3d5bcf495b5b48b476339b3f59e1ea1b67b5f85e9852cf8d8a2a03808b085e77" exitCode=0 Feb 23 10:27:16 crc kubenswrapper[4904]: I0223 10:27:16.897188 4904 generic.go:334] "Generic (PLEG): container finished" podID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerID="dfad4fcf842b5b63f73ce27c852ce6b5fff47dac76019802792048a202729d5c" exitCode=2 Feb 23 10:27:16 crc kubenswrapper[4904]: I0223 10:27:16.897199 4904 generic.go:334] "Generic (PLEG): container finished" podID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerID="f813e360ac8f96c887497a7bd194e767b41245c0043f526c72fcc849165d28ce" exitCode=0 Feb 23 10:27:16 crc kubenswrapper[4904]: I0223 10:27:16.897200 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerDied","Data":"3d5bcf495b5b48b476339b3f59e1ea1b67b5f85e9852cf8d8a2a03808b085e77"} Feb 23 10:27:16 crc kubenswrapper[4904]: I0223 10:27:16.897255 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerDied","Data":"dfad4fcf842b5b63f73ce27c852ce6b5fff47dac76019802792048a202729d5c"} Feb 23 10:27:16 crc kubenswrapper[4904]: I0223 10:27:16.897268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerDied","Data":"f813e360ac8f96c887497a7bd194e767b41245c0043f526c72fcc849165d28ce"} Feb 23 10:27:19 crc kubenswrapper[4904]: I0223 10:27:19.927757 4904 generic.go:334] "Generic (PLEG): container finished" podID="12d42a58-0412-45e7-85a3-99a0a16346bc" containerID="93f694d8cb626fde538d36935b090b1383eee549aadd2a290312700694cdfa7b" exitCode=0 Feb 23 10:27:19 crc kubenswrapper[4904]: I0223 10:27:19.927882 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-thqwm" event={"ID":"12d42a58-0412-45e7-85a3-99a0a16346bc","Type":"ContainerDied","Data":"93f694d8cb626fde538d36935b090b1383eee549aadd2a290312700694cdfa7b"} Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.328992 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.422645 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-config-data\") pod \"12d42a58-0412-45e7-85a3-99a0a16346bc\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.422709 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-scripts\") pod \"12d42a58-0412-45e7-85a3-99a0a16346bc\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.422845 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg45p\" (UniqueName: \"kubernetes.io/projected/12d42a58-0412-45e7-85a3-99a0a16346bc-kube-api-access-sg45p\") pod \"12d42a58-0412-45e7-85a3-99a0a16346bc\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.422954 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-combined-ca-bundle\") pod \"12d42a58-0412-45e7-85a3-99a0a16346bc\" (UID: \"12d42a58-0412-45e7-85a3-99a0a16346bc\") " Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.432006 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-scripts" (OuterVolumeSpecName: "scripts") pod "12d42a58-0412-45e7-85a3-99a0a16346bc" (UID: "12d42a58-0412-45e7-85a3-99a0a16346bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.459016 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d42a58-0412-45e7-85a3-99a0a16346bc-kube-api-access-sg45p" (OuterVolumeSpecName: "kube-api-access-sg45p") pod "12d42a58-0412-45e7-85a3-99a0a16346bc" (UID: "12d42a58-0412-45e7-85a3-99a0a16346bc"). InnerVolumeSpecName "kube-api-access-sg45p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.459126 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-config-data" (OuterVolumeSpecName: "config-data") pod "12d42a58-0412-45e7-85a3-99a0a16346bc" (UID: "12d42a58-0412-45e7-85a3-99a0a16346bc"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.459303 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12d42a58-0412-45e7-85a3-99a0a16346bc" (UID: "12d42a58-0412-45e7-85a3-99a0a16346bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.525581 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg45p\" (UniqueName: \"kubernetes.io/projected/12d42a58-0412-45e7-85a3-99a0a16346bc-kube-api-access-sg45p\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.525617 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.525630 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.525644 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d42a58-0412-45e7-85a3-99a0a16346bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.961229 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-thqwm" event={"ID":"12d42a58-0412-45e7-85a3-99a0a16346bc","Type":"ContainerDied","Data":"712a3a5a8203c1e07400ca6d4ad1408dd26dc9e539d6cb2f3afa827452c41112"} Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.961292 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="712a3a5a8203c1e07400ca6d4ad1408dd26dc9e539d6cb2f3afa827452c41112" Feb 23 10:27:21 crc kubenswrapper[4904]: I0223 10:27:21.961376 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-thqwm" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.102766 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 10:27:22 crc kubenswrapper[4904]: E0223 10:27:22.104407 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d42a58-0412-45e7-85a3-99a0a16346bc" containerName="nova-cell0-conductor-db-sync" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.104451 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d42a58-0412-45e7-85a3-99a0a16346bc" containerName="nova-cell0-conductor-db-sync" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.104896 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d42a58-0412-45e7-85a3-99a0a16346bc" containerName="nova-cell0-conductor-db-sync" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.106144 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.109563 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sj5pq" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.113593 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.132200 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.241776 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581bdca5-1a15-48a4-bbf5-941d419f276d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.241836 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581bdca5-1a15-48a4-bbf5-941d419f276d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.241881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsfzr\" (UniqueName: \"kubernetes.io/projected/581bdca5-1a15-48a4-bbf5-941d419f276d-kube-api-access-bsfzr\") pod \"nova-cell0-conductor-0\" (UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.344844 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581bdca5-1a15-48a4-bbf5-941d419f276d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.345279 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsfzr\" (UniqueName: \"kubernetes.io/projected/581bdca5-1a15-48a4-bbf5-941d419f276d-kube-api-access-bsfzr\") pod \"nova-cell0-conductor-0\" (UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.345539 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581bdca5-1a15-48a4-bbf5-941d419f276d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.352829 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581bdca5-1a15-48a4-bbf5-941d419f276d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.355377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581bdca5-1a15-48a4-bbf5-941d419f276d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.362497 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsfzr\" (UniqueName: \"kubernetes.io/projected/581bdca5-1a15-48a4-bbf5-941d419f276d-kube-api-access-bsfzr\") pod \"nova-cell0-conductor-0\" (UID: \"581bdca5-1a15-48a4-bbf5-941d419f276d\") " pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.424809 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.915649 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.983884 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"581bdca5-1a15-48a4-bbf5-941d419f276d","Type":"ContainerStarted","Data":"033250788cec1962c16ef70774c55776058ccce11764b5fff9b04f3f591ead54"} Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.987134 4904 generic.go:334] "Generic (PLEG): container finished" podID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerID="e3ae86d5352287246cbf5492a8b487b3660a38a7ff3d0b8140b5b9afd22e4a6e" exitCode=0 Feb 23 10:27:22 crc kubenswrapper[4904]: I0223 10:27:22.987209 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerDied","Data":"e3ae86d5352287246cbf5492a8b487b3660a38a7ff3d0b8140b5b9afd22e4a6e"} Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.118261 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.163412 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-run-httpd\") pod \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.163595 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-combined-ca-bundle\") pod \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.163754 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-log-httpd\") pod \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.163843 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-scripts\") pod \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.163896 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bch6\" (UniqueName: \"kubernetes.io/projected/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-kube-api-access-7bch6\") pod \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.163929 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-config-data\") pod \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.163942 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1ea5d8ea-4cf6-4b58-9203-44dd7916710e" (UID: "1ea5d8ea-4cf6-4b58-9203-44dd7916710e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.164031 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-sg-core-conf-yaml\") pod \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\" (UID: \"1ea5d8ea-4cf6-4b58-9203-44dd7916710e\") " Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.164340 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1ea5d8ea-4cf6-4b58-9203-44dd7916710e" (UID: "1ea5d8ea-4cf6-4b58-9203-44dd7916710e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.164838 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.164869 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.169759 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-kube-api-access-7bch6" (OuterVolumeSpecName: "kube-api-access-7bch6") pod "1ea5d8ea-4cf6-4b58-9203-44dd7916710e" (UID: "1ea5d8ea-4cf6-4b58-9203-44dd7916710e"). InnerVolumeSpecName "kube-api-access-7bch6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.170343 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-scripts" (OuterVolumeSpecName: "scripts") pod "1ea5d8ea-4cf6-4b58-9203-44dd7916710e" (UID: "1ea5d8ea-4cf6-4b58-9203-44dd7916710e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.194919 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1ea5d8ea-4cf6-4b58-9203-44dd7916710e" (UID: "1ea5d8ea-4cf6-4b58-9203-44dd7916710e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.244912 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea5d8ea-4cf6-4b58-9203-44dd7916710e" (UID: "1ea5d8ea-4cf6-4b58-9203-44dd7916710e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.267655 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.267985 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.267999 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bch6\" (UniqueName: \"kubernetes.io/projected/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-kube-api-access-7bch6\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.268014 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.322560 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-config-data" (OuterVolumeSpecName: "config-data") pod "1ea5d8ea-4cf6-4b58-9203-44dd7916710e" (UID: "1ea5d8ea-4cf6-4b58-9203-44dd7916710e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.376710 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea5d8ea-4cf6-4b58-9203-44dd7916710e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:23 crc kubenswrapper[4904]: I0223 10:27:23.999734 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"581bdca5-1a15-48a4-bbf5-941d419f276d","Type":"ContainerStarted","Data":"7f23354a352474c4dcdd503a126863c2c2b2a7e744e5c8151fa8c7e83f495be8"} Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:23.999862 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.003363 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea5d8ea-4cf6-4b58-9203-44dd7916710e","Type":"ContainerDied","Data":"92ceb762d64d717f96863530dfe89edaefff8fc846465da9e3fbcab8d826445a"} Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.003572 4904 scope.go:117] "RemoveContainer" containerID="3d5bcf495b5b48b476339b3f59e1ea1b67b5f85e9852cf8d8a2a03808b085e77" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.003508 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.044045 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.044018176 podStartE2EDuration="2.044018176s" podCreationTimestamp="2026-02-23 10:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:24.02765498 +0000 UTC m=+1277.448028503" watchObservedRunningTime="2026-02-23 10:27:24.044018176 +0000 UTC m=+1277.464391709" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.083639 4904 scope.go:117] "RemoveContainer" containerID="dfad4fcf842b5b63f73ce27c852ce6b5fff47dac76019802792048a202729d5c" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.094446 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.125996 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.138059 4904 scope.go:117] "RemoveContainer" containerID="f813e360ac8f96c887497a7bd194e767b41245c0043f526c72fcc849165d28ce" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.143227 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:24 crc kubenswrapper[4904]: E0223 10:27:24.143922 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="ceilometer-central-agent" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.143947 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="ceilometer-central-agent" Feb 23 10:27:24 crc kubenswrapper[4904]: E0223 10:27:24.143973 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="ceilometer-notification-agent" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.143983 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="ceilometer-notification-agent" Feb 23 10:27:24 crc kubenswrapper[4904]: E0223 10:27:24.144002 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="sg-core" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.144194 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="sg-core" Feb 23 10:27:24 crc kubenswrapper[4904]: E0223 10:27:24.144560 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="proxy-httpd" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.144636 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="proxy-httpd" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.145373 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="sg-core" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.145411 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="proxy-httpd" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.145428 4904 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="ceilometer-notification-agent" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.145458 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" containerName="ceilometer-central-agent" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.147912 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.151055 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.151270 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.173260 4904 scope.go:117] "RemoveContainer" containerID="e3ae86d5352287246cbf5492a8b487b3660a38a7ff3d0b8140b5b9afd22e4a6e" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.176189 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.193046 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-config-data\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.193100 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.193160 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtrt\" (UniqueName: \"kubernetes.io/projected/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-kube-api-access-dhtrt\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.193192 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-scripts\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.193277 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.193317 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.193344 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.295845 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.295940 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.295997 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.296036 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-config-data\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.296098 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.296257 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhtrt\" (UniqueName: \"kubernetes.io/projected/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-kube-api-access-dhtrt\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.296340 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-scripts\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.297510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.298508 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.305028 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.306191 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.309555 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-scripts\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.320828 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-config-data\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.320881 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhtrt\" (UniqueName: \"kubernetes.io/projected/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-kube-api-access-dhtrt\") pod \"ceilometer-0\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") " pod="openstack/ceilometer-0" Feb 23 10:27:24 crc kubenswrapper[4904]: I0223 10:27:24.468788 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:27:25 crc kubenswrapper[4904]: I0223 10:27:25.034972 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:27:25 crc kubenswrapper[4904]: W0223 10:27:25.047056 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbf26cb5_2c3e_4742_9595_ae7e8dad8af1.slice/crio-1496dda5dccdb5cb66cfce6902d462912d8d71a7da89ea86a17e61aef658300a WatchSource:0}: Error finding container 1496dda5dccdb5cb66cfce6902d462912d8d71a7da89ea86a17e61aef658300a: Status 404 returned error can't find the container with id 1496dda5dccdb5cb66cfce6902d462912d8d71a7da89ea86a17e61aef658300a Feb 23 10:27:25 crc kubenswrapper[4904]: I0223 10:27:25.265831 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea5d8ea-4cf6-4b58-9203-44dd7916710e" path="/var/lib/kubelet/pods/1ea5d8ea-4cf6-4b58-9203-44dd7916710e/volumes" Feb 23 10:27:26 crc kubenswrapper[4904]: I0223 10:27:26.038269 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerStarted","Data":"71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a"} Feb 23 10:27:26 crc kubenswrapper[4904]: I0223 10:27:26.039012 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerStarted","Data":"1496dda5dccdb5cb66cfce6902d462912d8d71a7da89ea86a17e61aef658300a"} Feb 23 10:27:27 crc kubenswrapper[4904]: I0223 10:27:27.049802 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerStarted","Data":"4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d"} Feb 23 10:27:28 crc kubenswrapper[4904]: I0223 10:27:28.063549 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerStarted","Data":"ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300"} Feb 23 10:27:30 crc kubenswrapper[4904]: I0223 10:27:30.090607 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerStarted","Data":"b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c"} Feb 23 10:27:30 crc kubenswrapper[4904]: I0223 10:27:30.091272 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 10:27:30 crc kubenswrapper[4904]: I0223 10:27:30.115563 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.867001119 podStartE2EDuration="6.11553921s" podCreationTimestamp="2026-02-23 10:27:24 +0000 UTC" firstStartedPulling="2026-02-23 10:27:25.050295147 +0000 UTC m=+1278.470668660" lastFinishedPulling="2026-02-23 10:27:29.298833198 +0000 UTC m=+1282.719206751" observedRunningTime="2026-02-23 10:27:30.110059054 +0000 UTC m=+1283.530432567" watchObservedRunningTime="2026-02-23 10:27:30.11553921 +0000 UTC m=+1283.535912733" Feb 23 10:27:32 crc kubenswrapper[4904]: I0223 10:27:32.464555 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 23 10:27:32 crc kubenswrapper[4904]: I0223 10:27:32.978236 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4cq2r"] Feb 23 10:27:32 crc kubenswrapper[4904]: I0223 10:27:32.980285 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:32 crc kubenswrapper[4904]: I0223 10:27:32.984007 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 23 10:27:32 crc kubenswrapper[4904]: I0223 10:27:32.984078 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.007443 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4cq2r"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.014655 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-config-data\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.014949 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-scripts\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.015040 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2kdg\" (UniqueName: \"kubernetes.io/projected/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-kube-api-access-c2kdg\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.015075 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.119583 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-config-data\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.120052 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-scripts\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.120168 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2kdg\" (UniqueName: \"kubernetes.io/projected/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-kube-api-access-c2kdg\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.120190 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.139095 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.151384 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-scripts\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.151589 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-config-data\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.188495 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2kdg\" (UniqueName: \"kubernetes.io/projected/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-kube-api-access-c2kdg\") pod \"nova-cell0-cell-mapping-4cq2r\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.190829 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.193259 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.200573 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.227381 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.310835 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.312731 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.312776 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.322054 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.331028 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.331150 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmhk\" (UniqueName: \"kubernetes.io/projected/802e4259-81ea-4d24-86b8-f607bb9ed3b7-kube-api-access-zrmhk\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.331247 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-config-data\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.331358 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802e4259-81ea-4d24-86b8-f607bb9ed3b7-logs\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.363946 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.384927 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.396021 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.413924 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.446993 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.448805 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjj5g\" (UniqueName: \"kubernetes.io/projected/bfad934d-b857-4a73-92b6-86495d75c8bd-kube-api-access-wjj5g\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.448908 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-config-data\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.448949 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vd5\" (UniqueName: \"kubernetes.io/projected/d4415fc6-01e5-449e-9c42-3e7e37d226bb-kube-api-access-x9vd5\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.448984 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-config-data\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.449015 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.449070 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.449100 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.449120 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfad934d-b857-4a73-92b6-86495d75c8bd-logs\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.449142 4904 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zrmhk\" (UniqueName: \"kubernetes.io/projected/802e4259-81ea-4d24-86b8-f607bb9ed3b7-kube-api-access-zrmhk\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.449182 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-config-data\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.449220 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802e4259-81ea-4d24-86b8-f607bb9ed3b7-logs\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.449755 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802e4259-81ea-4d24-86b8-f607bb9ed3b7-logs\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.457246 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-config-data\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.486271 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.488482 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.506944 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.508948 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.535048 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrmhk\" (UniqueName: \"kubernetes.io/projected/802e4259-81ea-4d24-86b8-f607bb9ed3b7-kube-api-access-zrmhk\") pod \"nova-api-0\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") " pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.553968 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9vd5\" (UniqueName: \"kubernetes.io/projected/d4415fc6-01e5-449e-9c42-3e7e37d226bb-kube-api-access-x9vd5\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554034 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-config-data\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554066 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554111 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554140 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554164 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfad934d-b857-4a73-92b6-86495d75c8bd-logs\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554247 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4hkh\" (UniqueName: \"kubernetes.io/projected/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-kube-api-access-c4hkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554266 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjj5g\" (UniqueName: \"kubernetes.io/projected/bfad934d-b857-4a73-92b6-86495d75c8bd-kube-api-access-wjj5g\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554310 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.554333 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-config-data\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.555743 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfad934d-b857-4a73-92b6-86495d75c8bd-logs\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.560175 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-config-data\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.565561 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-config-data\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.567484 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.570579 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.585418 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.593076 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9vd5\" (UniqueName: \"kubernetes.io/projected/d4415fc6-01e5-449e-9c42-3e7e37d226bb-kube-api-access-x9vd5\") pod \"nova-scheduler-0\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") " pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.599840 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.602594 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjj5g\" (UniqueName: \"kubernetes.io/projected/bfad934d-b857-4a73-92b6-86495d75c8bd-kube-api-access-wjj5g\") pod \"nova-metadata-0\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.645267 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.657786 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4hkh\" (UniqueName: \"kubernetes.io/projected/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-kube-api-access-c4hkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.658219 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.658375 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.671749 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.676987 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.678830 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4hkh\" (UniqueName: \"kubernetes.io/projected/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-kube-api-access-c4hkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.691766 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fvhqv"] 
Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.693538 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.763370 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.763440 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.763570 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hltqk\" (UniqueName: \"kubernetes.io/projected/f9a78324-b9aa-4d60-86f6-abf52adf1de4-kube-api-access-hltqk\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.763704 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.765456 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.765696 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-config\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.801193 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fvhqv"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.871211 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mlbcz"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.873179 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.878615 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.878996 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.883146 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hltqk\" (UniqueName: \"kubernetes.io/projected/f9a78324-b9aa-4d60-86f6-abf52adf1de4-kube-api-access-hltqk\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.884588 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.884765 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.884859 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-config\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.885024 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.885070 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.886615 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.887119 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-config\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 
10:27:33.887444 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.888479 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.889020 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mlbcz"] Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.890855 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.891457 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.917950 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:33 crc kubenswrapper[4904]: I0223 10:27:33.919559 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hltqk\" (UniqueName: \"kubernetes.io/projected/f9a78324-b9aa-4d60-86f6-abf52adf1de4-kube-api-access-hltqk\") pod \"dnsmasq-dns-757b4f8459-fvhqv\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:33.998234 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:33.998392 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phjb\" (UniqueName: \"kubernetes.io/projected/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-kube-api-access-8phjb\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.004847 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-scripts\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.004988 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-config-data\") pod 
\"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.047477 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.112832 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.112926 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phjb\" (UniqueName: \"kubernetes.io/projected/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-kube-api-access-8phjb\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.113029 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-scripts\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.113080 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-config-data\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.116139 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4cq2r"] Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.138544 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-config-data\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.138616 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-scripts\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.141742 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mlbcz\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.145958 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8phjb\" (UniqueName: \"kubernetes.io/projected/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-kube-api-access-8phjb\") pod \"nova-cell1-conductor-db-sync-mlbcz\" 
(UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.148647 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.187945 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"802e4259-81ea-4d24-86b8-f607bb9ed3b7","Type":"ContainerStarted","Data":"29ca8cf808cd1c137588f12c539e48f72bfbd804f5a34609b611bbd93868568a"} Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.209400 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4cq2r" event={"ID":"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1","Type":"ContainerStarted","Data":"99a2e7daa817901a194932dc5b672a6432e858df839ecbe42090712ab27be2d7"} Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.212654 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.472432 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:34 crc kubenswrapper[4904]: W0223 10:27:34.484403 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfad934d_b857_4a73_92b6_86495d75c8bd.slice/crio-598ee010d858bb6d45ea7e4416ceb80b91c04b1de296b13cff827bb5955af10e WatchSource:0}: Error finding container 598ee010d858bb6d45ea7e4416ceb80b91c04b1de296b13cff827bb5955af10e: Status 404 returned error can't find the container with id 598ee010d858bb6d45ea7e4416ceb80b91c04b1de296b13cff827bb5955af10e Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.649177 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:27:34 crc kubenswrapper[4904]: W0223 10:27:34.655847 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4415fc6_01e5_449e_9c42_3e7e37d226bb.slice/crio-e902e93b52bfa0206791ce2ed7c308a9d54d89baf3bf61760c8f1c7763f3bd60 WatchSource:0}: Error finding container e902e93b52bfa0206791ce2ed7c308a9d54d89baf3bf61760c8f1c7763f3bd60: Status 404 returned error can't find the container with id e902e93b52bfa0206791ce2ed7c308a9d54d89baf3bf61760c8f1c7763f3bd60 Feb 23 10:27:34 crc kubenswrapper[4904]: W0223 10:27:34.659001 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffd478fe_8ee9_4a73_bfb8_d817c50124f1.slice/crio-981c123598e57e03400a1d1dbbf0b32fd0711711c5d58c7c414996f94c8132ca WatchSource:0}: Error finding container 981c123598e57e03400a1d1dbbf0b32fd0711711c5d58c7c414996f94c8132ca: Status 404 returned error can't find the container with id 981c123598e57e03400a1d1dbbf0b32fd0711711c5d58c7c414996f94c8132ca Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.665845 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.914505 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fvhqv"] Feb 23 10:27:34 crc kubenswrapper[4904]: I0223 10:27:34.930915 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mlbcz"] Feb 23 10:27:34 crc kubenswrapper[4904]: W0223 10:27:34.932927 4904 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce0a6f6_befd_45fe_a0c2_861c6bb3e7fe.slice/crio-298b520ee5017cb91649cf40e897e6c2c27b2016ee26cd9fc2919aa5b13762ac WatchSource:0}: Error finding container 298b520ee5017cb91649cf40e897e6c2c27b2016ee26cd9fc2919aa5b13762ac: Status 404 returned error can't find the container with id 298b520ee5017cb91649cf40e897e6c2c27b2016ee26cd9fc2919aa5b13762ac Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.225045 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ffd478fe-8ee9-4a73-bfb8-d817c50124f1","Type":"ContainerStarted","Data":"981c123598e57e03400a1d1dbbf0b32fd0711711c5d58c7c414996f94c8132ca"} Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.232123 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4cq2r" event={"ID":"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1","Type":"ContainerStarted","Data":"c72c16fe1fab54fb1580a46f66f4d7bc93cc4a9f54e5a522203dc428b734cc18"} Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.234869 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d4415fc6-01e5-449e-9c42-3e7e37d226bb","Type":"ContainerStarted","Data":"e902e93b52bfa0206791ce2ed7c308a9d54d89baf3bf61760c8f1c7763f3bd60"} Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.238020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfad934d-b857-4a73-92b6-86495d75c8bd","Type":"ContainerStarted","Data":"598ee010d858bb6d45ea7e4416ceb80b91c04b1de296b13cff827bb5955af10e"} Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.248285 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" event={"ID":"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe","Type":"ContainerStarted","Data":"ab6d1e25402eaaacefc9793360b9f580e6dcf0e908956ab7d4f44b4187f5251e"} Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.248343 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" event={"ID":"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe","Type":"ContainerStarted","Data":"298b520ee5017cb91649cf40e897e6c2c27b2016ee26cd9fc2919aa5b13762ac"} Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.266439 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4cq2r" podStartSLOduration=3.266404812 podStartE2EDuration="3.266404812s" podCreationTimestamp="2026-02-23 10:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:35.248145892 +0000 UTC m=+1288.668519405" watchObservedRunningTime="2026-02-23 10:27:35.266404812 +0000 UTC m=+1288.686778325" Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.270016 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" event={"ID":"f9a78324-b9aa-4d60-86f6-abf52adf1de4","Type":"ContainerStarted","Data":"7a750c2c87b1b8286eb45732f2675051130642e3f80db7964f2d0acf05e12d11"} Feb 23 10:27:35 crc kubenswrapper[4904]: I0223 10:27:35.270071 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" event={"ID":"f9a78324-b9aa-4d60-86f6-abf52adf1de4","Type":"ContainerStarted","Data":"159ee20b0e3a3ecc9ca6f59a8bc66a3900bd1360e94ceaaae6b75fa8edfdda58"} Feb 23 10:27:35 crc 
kubenswrapper[4904]: I0223 10:27:35.280750 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" podStartSLOduration=2.280735249 podStartE2EDuration="2.280735249s" podCreationTimestamp="2026-02-23 10:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:35.275219822 +0000 UTC m=+1288.695593345" watchObservedRunningTime="2026-02-23 10:27:35.280735249 +0000 UTC m=+1288.701108762" Feb 23 10:27:36 crc kubenswrapper[4904]: I0223 10:27:36.278912 4904 generic.go:334] "Generic (PLEG): container finished" podID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerID="7a750c2c87b1b8286eb45732f2675051130642e3f80db7964f2d0acf05e12d11" exitCode=0 Feb 23 10:27:36 crc kubenswrapper[4904]: I0223 10:27:36.279263 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" event={"ID":"f9a78324-b9aa-4d60-86f6-abf52adf1de4","Type":"ContainerDied","Data":"7a750c2c87b1b8286eb45732f2675051130642e3f80db7964f2d0acf05e12d11"} Feb 23 10:27:36 crc kubenswrapper[4904]: I0223 10:27:36.279858 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:36 crc kubenswrapper[4904]: I0223 10:27:36.279874 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" event={"ID":"f9a78324-b9aa-4d60-86f6-abf52adf1de4","Type":"ContainerStarted","Data":"494425e1f3cd9dc313d0142e1f031250fae2626080d819cf5c6589ad36a93342"} Feb 23 10:27:36 crc kubenswrapper[4904]: I0223 10:27:36.307038 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" podStartSLOduration=3.3070136 podStartE2EDuration="3.3070136s" podCreationTimestamp="2026-02-23 10:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:36.302931144 +0000 UTC m=+1289.723304697" watchObservedRunningTime="2026-02-23 10:27:36.3070136 +0000 UTC m=+1289.727387113" Feb 23 10:27:37 crc kubenswrapper[4904]: I0223 10:27:37.463231 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:37 crc kubenswrapper[4904]: I0223 10:27:37.481465 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.353787 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d4415fc6-01e5-449e-9c42-3e7e37d226bb","Type":"ContainerStarted","Data":"3cb7ad408bbbe055c46a0ce689229dc52a792eed7dea11d25cc4d86e22a9aaaf"} Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.369151 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfad934d-b857-4a73-92b6-86495d75c8bd","Type":"ContainerStarted","Data":"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e"} Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.369504 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfad934d-b857-4a73-92b6-86495d75c8bd","Type":"ContainerStarted","Data":"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635"} Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.369753 4904 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerName="nova-metadata-log" containerID="cri-o://97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635" gracePeriod=30 Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.369944 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerName="nova-metadata-metadata" containerID="cri-o://5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e" gracePeriod=30 Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.379878 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"802e4259-81ea-4d24-86b8-f607bb9ed3b7","Type":"ContainerStarted","Data":"f881512fa121667304f4aa20732910cee9f687bf22ce0f8457a7968f0192f2e2"} Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.380164 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"802e4259-81ea-4d24-86b8-f607bb9ed3b7","Type":"ContainerStarted","Data":"f93a7defda0bca9c2d478c580efe945c2f5369312ad818f56eedab10ea8a1603"} Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.383187 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.880537913 podStartE2EDuration="6.383165758s" podCreationTimestamp="2026-02-23 10:27:33 +0000 UTC" firstStartedPulling="2026-02-23 10:27:34.660579866 +0000 UTC m=+1288.080953379" lastFinishedPulling="2026-02-23 10:27:38.163207701 +0000 UTC m=+1291.583581224" observedRunningTime="2026-02-23 10:27:39.376001874 +0000 UTC m=+1292.796375387" watchObservedRunningTime="2026-02-23 10:27:39.383165758 +0000 UTC m=+1292.803539281" Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.391755 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ffd478fe-8ee9-4a73-bfb8-d817c50124f1","Type":"ContainerStarted","Data":"c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc"} Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.391950 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ffd478fe-8ee9-4a73-bfb8-d817c50124f1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc" gracePeriod=30 Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.425702 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.769544602 podStartE2EDuration="6.425681439s" podCreationTimestamp="2026-02-23 10:27:33 +0000 UTC" firstStartedPulling="2026-02-23 10:27:34.505743136 +0000 UTC m=+1287.926116649" lastFinishedPulling="2026-02-23 10:27:38.161879973 +0000 UTC m=+1291.582253486" observedRunningTime="2026-02-23 10:27:39.397240139 +0000 UTC m=+1292.817613642" watchObservedRunningTime="2026-02-23 10:27:39.425681439 +0000 UTC m=+1292.846054942" Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.427270 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.926162353 podStartE2EDuration="6.427262904s" podCreationTimestamp="2026-02-23 10:27:33 +0000 UTC" firstStartedPulling="2026-02-23 10:27:34.663282163 +0000 UTC m=+1288.083655676" lastFinishedPulling="2026-02-23 10:27:38.164382714 +0000 UTC m=+1291.584756227" 
observedRunningTime="2026-02-23 10:27:39.418571646 +0000 UTC m=+1292.838945159" watchObservedRunningTime="2026-02-23 10:27:39.427262904 +0000 UTC m=+1292.847636417" Feb 23 10:27:39 crc kubenswrapper[4904]: I0223 10:27:39.442218 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.466305554 podStartE2EDuration="6.442181679s" podCreationTimestamp="2026-02-23 10:27:33 +0000 UTC" firstStartedPulling="2026-02-23 10:27:34.174689536 +0000 UTC m=+1287.595063049" lastFinishedPulling="2026-02-23 10:27:38.150565651 +0000 UTC m=+1291.570939174" observedRunningTime="2026-02-23 10:27:39.434818049 +0000 UTC m=+1292.855191572" watchObservedRunningTime="2026-02-23 10:27:39.442181679 +0000 UTC m=+1292.862555192" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.085927 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.240074 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-config-data\") pod \"bfad934d-b857-4a73-92b6-86495d75c8bd\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.240307 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-combined-ca-bundle\") pod \"bfad934d-b857-4a73-92b6-86495d75c8bd\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.240359 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfad934d-b857-4a73-92b6-86495d75c8bd-logs\") pod \"bfad934d-b857-4a73-92b6-86495d75c8bd\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.240441 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjj5g\" (UniqueName: \"kubernetes.io/projected/bfad934d-b857-4a73-92b6-86495d75c8bd-kube-api-access-wjj5g\") pod \"bfad934d-b857-4a73-92b6-86495d75c8bd\" (UID: \"bfad934d-b857-4a73-92b6-86495d75c8bd\") " Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.243201 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfad934d-b857-4a73-92b6-86495d75c8bd-logs" (OuterVolumeSpecName: "logs") pod "bfad934d-b857-4a73-92b6-86495d75c8bd" (UID: "bfad934d-b857-4a73-92b6-86495d75c8bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.249379 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfad934d-b857-4a73-92b6-86495d75c8bd-kube-api-access-wjj5g" (OuterVolumeSpecName: "kube-api-access-wjj5g") pod "bfad934d-b857-4a73-92b6-86495d75c8bd" (UID: "bfad934d-b857-4a73-92b6-86495d75c8bd"). InnerVolumeSpecName "kube-api-access-wjj5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.276007 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfad934d-b857-4a73-92b6-86495d75c8bd" (UID: "bfad934d-b857-4a73-92b6-86495d75c8bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.276183 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-config-data" (OuterVolumeSpecName: "config-data") pod "bfad934d-b857-4a73-92b6-86495d75c8bd" (UID: "bfad934d-b857-4a73-92b6-86495d75c8bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.342920 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.342954 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfad934d-b857-4a73-92b6-86495d75c8bd-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.342966 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjj5g\" (UniqueName: \"kubernetes.io/projected/bfad934d-b857-4a73-92b6-86495d75c8bd-kube-api-access-wjj5g\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.342976 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfad934d-b857-4a73-92b6-86495d75c8bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.404250 4904 generic.go:334] "Generic (PLEG): container finished" podID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerID="5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e" exitCode=0 Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.404286 4904 generic.go:334] "Generic (PLEG): container finished" podID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerID="97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635" exitCode=143 Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.404341 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.405380 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfad934d-b857-4a73-92b6-86495d75c8bd","Type":"ContainerDied","Data":"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e"} Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.405501 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfad934d-b857-4a73-92b6-86495d75c8bd","Type":"ContainerDied","Data":"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635"} Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.405566 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfad934d-b857-4a73-92b6-86495d75c8bd","Type":"ContainerDied","Data":"598ee010d858bb6d45ea7e4416ceb80b91c04b1de296b13cff827bb5955af10e"} Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.405545 4904 scope.go:117] "RemoveContainer" containerID="5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.442150 4904 scope.go:117] "RemoveContainer" containerID="97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.454438 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.466885 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.476912 4904 scope.go:117] "RemoveContainer" containerID="5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e" Feb 23 10:27:40 crc kubenswrapper[4904]: E0223 10:27:40.477390 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e\": container with ID starting with 5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e not found: ID does not exist" containerID="5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.477530 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e"} err="failed to get container status \"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e\": rpc error: code = NotFound desc = could not find container \"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e\": container with ID starting with 5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e not found: ID does not exist" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.477564 4904 scope.go:117] "RemoveContainer" containerID="97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635" Feb 23 10:27:40 crc kubenswrapper[4904]: E0223 10:27:40.479266 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635\": container with ID starting with 97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635 not found: ID does not exist" containerID="97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 
10:27:40.479297 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635"} err="failed to get container status \"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635\": rpc error: code = NotFound desc = could not find container \"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635\": container with ID starting with 97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635 not found: ID does not exist" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.479317 4904 scope.go:117] "RemoveContainer" containerID="5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.479649 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e"} err="failed to get container status \"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e\": rpc error: code = NotFound desc = could not find container \"5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e\": container with ID starting with 5356af9f7f1a0c291da51bcae67de08ec51d38e2880ff1b4af10882940fb181e not found: ID does not exist" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.479672 4904 scope.go:117] "RemoveContainer" containerID="97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.479883 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635"} err="failed to get container status \"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635\": rpc error: code = NotFound desc = could not find container \"97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635\": container with ID starting with 97bfbc424cedd24ff3fa1fe46217d39f6457e5619cefa5a9afe567cf2eb76635 not found: ID does not exist" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.489786 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:40 crc kubenswrapper[4904]: E0223 10:27:40.490524 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerName="nova-metadata-metadata" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.490606 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerName="nova-metadata-metadata" Feb 23 10:27:40 crc kubenswrapper[4904]: E0223 10:27:40.490738 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerName="nova-metadata-log" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.490795 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerName="nova-metadata-log" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.491046 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerName="nova-metadata-log" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.491899 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" containerName="nova-metadata-metadata" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.493112 4904 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.496093 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.496374 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.507816 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:40 crc kubenswrapper[4904]: E0223 10:27:40.582191 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfad934d_b857_4a73_92b6_86495d75c8bd.slice\": RecentStats: unable to find data in memory cache]" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.649607 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.649889 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mtf\" (UniqueName: \"kubernetes.io/projected/d60d9bf2-023c-4157-b235-edd97257c125-kube-api-access-d5mtf\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.650214 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-config-data\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.650367 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60d9bf2-023c-4157-b235-edd97257c125-logs\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.650414 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.753288 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5mtf\" (UniqueName: \"kubernetes.io/projected/d60d9bf2-023c-4157-b235-edd97257c125-kube-api-access-d5mtf\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.753589 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-config-data\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " 
pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.753698 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60d9bf2-023c-4157-b235-edd97257c125-logs\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.753757 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.753815 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.755695 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60d9bf2-023c-4157-b235-edd97257c125-logs\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.762665 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.772153 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.772627 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5mtf\" (UniqueName: \"kubernetes.io/projected/d60d9bf2-023c-4157-b235-edd97257c125-kube-api-access-d5mtf\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.774264 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-config-data\") pod \"nova-metadata-0\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") " pod="openstack/nova-metadata-0" Feb 23 10:27:40 crc kubenswrapper[4904]: I0223 10:27:40.830673 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:27:41 crc kubenswrapper[4904]: I0223 10:27:41.284286 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfad934d-b857-4a73-92b6-86495d75c8bd" path="/var/lib/kubelet/pods/bfad934d-b857-4a73-92b6-86495d75c8bd/volumes" Feb 23 10:27:41 crc kubenswrapper[4904]: I0223 10:27:41.389498 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:41 crc kubenswrapper[4904]: W0223 10:27:41.390319 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60d9bf2_023c_4157_b235_edd97257c125.slice/crio-b06c449ed316762aa6b8f42f0851656a2a18a8663aff6d9ca237aca3c3a332db WatchSource:0}: Error finding container b06c449ed316762aa6b8f42f0851656a2a18a8663aff6d9ca237aca3c3a332db: Status 404 returned error can't find the container with id b06c449ed316762aa6b8f42f0851656a2a18a8663aff6d9ca237aca3c3a332db Feb 23 10:27:41 crc kubenswrapper[4904]: I0223 10:27:41.423452 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d60d9bf2-023c-4157-b235-edd97257c125","Type":"ContainerStarted","Data":"b06c449ed316762aa6b8f42f0851656a2a18a8663aff6d9ca237aca3c3a332db"} Feb 23 10:27:42 crc kubenswrapper[4904]: I0223 10:27:42.438245 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d60d9bf2-023c-4157-b235-edd97257c125","Type":"ContainerStarted","Data":"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323"} Feb 23 10:27:42 crc kubenswrapper[4904]: I0223 10:27:42.440520 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d60d9bf2-023c-4157-b235-edd97257c125","Type":"ContainerStarted","Data":"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1"} Feb 23 10:27:42 crc kubenswrapper[4904]: I0223 10:27:42.481561 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.481530148 podStartE2EDuration="2.481530148s" podCreationTimestamp="2026-02-23 10:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:42.461419675 +0000 UTC m=+1295.881793188" watchObservedRunningTime="2026-02-23 10:27:42.481530148 +0000 UTC m=+1295.901903701" Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.460545 4904 generic.go:334] "Generic (PLEG): container finished" podID="2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" containerID="ab6d1e25402eaaacefc9793360b9f580e6dcf0e908956ab7d4f44b4187f5251e" exitCode=0 Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.461059 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" event={"ID":"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe","Type":"ContainerDied","Data":"ab6d1e25402eaaacefc9793360b9f580e6dcf0e908956ab7d4f44b4187f5251e"} Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.467630 4904 generic.go:334] "Generic (PLEG): container finished" podID="3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" containerID="c72c16fe1fab54fb1580a46f66f4d7bc93cc4a9f54e5a522203dc428b734cc18" exitCode=0 Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.467777 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4cq2r" 
event={"ID":"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1","Type":"ContainerDied","Data":"c72c16fe1fab54fb1580a46f66f4d7bc93cc4a9f54e5a522203dc428b734cc18"} Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.587073 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.587145 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.893131 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.893200 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.919440 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:27:43 crc kubenswrapper[4904]: I0223 10:27:43.931336 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.048868 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.132419 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j9kcl"] Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.132709 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" podUID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" containerName="dnsmasq-dns" containerID="cri-o://00a13767afa68fb19bc0e9ec1ab15ad8f8655756c11a201da0afdee435585226" gracePeriod=10 Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.516028 4904 generic.go:334] "Generic (PLEG): container finished" podID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" containerID="00a13767afa68fb19bc0e9ec1ab15ad8f8655756c11a201da0afdee435585226" exitCode=0 Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.517186 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" event={"ID":"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8","Type":"ContainerDied","Data":"00a13767afa68fb19bc0e9ec1ab15ad8f8655756c11a201da0afdee435585226"} Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.566237 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.669091 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.669594 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.730109 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.862900 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-svc\") pod \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.863044 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgbrb\" (UniqueName: \"kubernetes.io/projected/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-kube-api-access-kgbrb\") pod \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.863116 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-sb\") pod \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.863151 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-config\") pod \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.863233 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-nb\") pod \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.865882 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-swift-storage-0\") pod \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\" (UID: \"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8\") " Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.884141 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-kube-api-access-kgbrb" (OuterVolumeSpecName: "kube-api-access-kgbrb") pod "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" (UID: "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8"). InnerVolumeSpecName "kube-api-access-kgbrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.945072 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" (UID: "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.946551 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-config" (OuterVolumeSpecName: "config") pod "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" (UID: "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.969514 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" (UID: "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.970181 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" (UID: "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.992283 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.992348 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.992362 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgbrb\" (UniqueName: \"kubernetes.io/projected/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-kube-api-access-kgbrb\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.992377 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:44 crc kubenswrapper[4904]: I0223 10:27:44.992396 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.039418 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" (UID: "e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.045160 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.068057 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.095398 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.196618 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-config-data\") pod \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.196663 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-combined-ca-bundle\") pod \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.196782 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-combined-ca-bundle\") pod \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.196897 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-scripts\") pod \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.196929 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-scripts\") pod \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.196958 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8phjb\" (UniqueName: \"kubernetes.io/projected/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-kube-api-access-8phjb\") pod \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.196996 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2kdg\" (UniqueName: \"kubernetes.io/projected/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-kube-api-access-c2kdg\") pod \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\" (UID: \"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1\") " Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.197024 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-config-data\") pod \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\" (UID: \"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe\") " Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.201361 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-scripts" (OuterVolumeSpecName: "scripts") pod "3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" (UID: "3b8e681f-ebb7-4bdb-b592-2636fab0d8b1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.202638 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-kube-api-access-8phjb" (OuterVolumeSpecName: "kube-api-access-8phjb") pod "2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" (UID: "2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe"). InnerVolumeSpecName "kube-api-access-8phjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.204620 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-scripts" (OuterVolumeSpecName: "scripts") pod "2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" (UID: "2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.207966 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-kube-api-access-c2kdg" (OuterVolumeSpecName: "kube-api-access-c2kdg") pod "3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" (UID: "3b8e681f-ebb7-4bdb-b592-2636fab0d8b1"). InnerVolumeSpecName "kube-api-access-c2kdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.227080 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-config-data" (OuterVolumeSpecName: "config-data") pod "3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" (UID: "3b8e681f-ebb7-4bdb-b592-2636fab0d8b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.231748 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" (UID: "3b8e681f-ebb7-4bdb-b592-2636fab0d8b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.232964 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" (UID: "2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.253033 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-config-data" (OuterVolumeSpecName: "config-data") pod "2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" (UID: "2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.300117 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2kdg\" (UniqueName: \"kubernetes.io/projected/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-kube-api-access-c2kdg\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.300456 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.301164 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.301273 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.301405 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.301488 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.301581 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.301656 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8phjb\" (UniqueName: \"kubernetes.io/projected/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe-kube-api-access-8phjb\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.529201 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.529193 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mlbcz" event={"ID":"2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe","Type":"ContainerDied","Data":"298b520ee5017cb91649cf40e897e6c2c27b2016ee26cd9fc2919aa5b13762ac"} Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.530360 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="298b520ee5017cb91649cf40e897e6c2c27b2016ee26cd9fc2919aa5b13762ac" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.533965 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4cq2r" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.533953 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4cq2r" event={"ID":"3b8e681f-ebb7-4bdb-b592-2636fab0d8b1","Type":"ContainerDied","Data":"99a2e7daa817901a194932dc5b672a6432e858df839ecbe42090712ab27be2d7"} Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.534128 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a2e7daa817901a194932dc5b672a6432e858df839ecbe42090712ab27be2d7" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.537427 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" event={"ID":"e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8","Type":"ContainerDied","Data":"8ef7ac5716479e81344298bb908afe5f754772ec29520c733d75c9bd135f4527"} Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.537493 4904 scope.go:117] "RemoveContainer" containerID="00a13767afa68fb19bc0e9ec1ab15ad8f8655756c11a201da0afdee435585226" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.537514 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-j9kcl" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.574433 4904 scope.go:117] "RemoveContainer" containerID="c076d0c6ee0ed5b336708911364ec17de0aa89279e11b0f9842757a95cc2b772" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.609805 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j9kcl"] Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.622192 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j9kcl"] Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.642940 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 10:27:45 crc kubenswrapper[4904]: E0223 10:27:45.643493 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" containerName="nova-cell1-conductor-db-sync" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.643512 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" containerName="nova-cell1-conductor-db-sync" Feb 23 10:27:45 crc kubenswrapper[4904]: E0223 10:27:45.643544 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" containerName="nova-manage" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.643550 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" containerName="nova-manage" Feb 23 10:27:45 crc kubenswrapper[4904]: E0223 10:27:45.643561 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" containerName="dnsmasq-dns" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.643567 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" containerName="dnsmasq-dns" Feb 23 10:27:45 crc kubenswrapper[4904]: E0223 10:27:45.643582 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" containerName="init" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.643587 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" containerName="init" Feb 23 10:27:45 crc 
kubenswrapper[4904]: I0223 10:27:45.643865 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" containerName="nova-manage" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.643880 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" containerName="nova-cell1-conductor-db-sync" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.643894 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" containerName="dnsmasq-dns" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.644672 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.648310 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.683489 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.710759 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4kx\" (UniqueName: \"kubernetes.io/projected/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-kube-api-access-xf4kx\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.710844 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.710872 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.738326 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.738661 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-log" containerID="cri-o://f93a7defda0bca9c2d478c580efe945c2f5369312ad818f56eedab10ea8a1603" gracePeriod=30 Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.739183 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-api" containerID="cri-o://f881512fa121667304f4aa20732910cee9f687bf22ce0f8457a7968f0192f2e2" gracePeriod=30 Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.751790 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.763262 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.763528 4904 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-metadata-0" podUID="d60d9bf2-023c-4157-b235-edd97257c125" containerName="nova-metadata-log" containerID="cri-o://c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1" gracePeriod=30 Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.764056 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d60d9bf2-023c-4157-b235-edd97257c125" containerName="nova-metadata-metadata" containerID="cri-o://37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323" gracePeriod=30 Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.816243 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4kx\" (UniqueName: \"kubernetes.io/projected/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-kube-api-access-xf4kx\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.816446 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.816476 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.821759 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.826231 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.832495 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.832531 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.838265 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4kx\" (UniqueName: \"kubernetes.io/projected/1c8e85f3-888e-4a20-a6be-bed2b85f2b45-kube-api-access-xf4kx\") pod \"nova-cell1-conductor-0\" (UID: \"1c8e85f3-888e-4a20-a6be-bed2b85f2b45\") " pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:45 crc kubenswrapper[4904]: I0223 10:27:45.991007 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.410122 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.533877 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60d9bf2-023c-4157-b235-edd97257c125-logs\") pod \"d60d9bf2-023c-4157-b235-edd97257c125\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") "
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.533929 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5mtf\" (UniqueName: \"kubernetes.io/projected/d60d9bf2-023c-4157-b235-edd97257c125-kube-api-access-d5mtf\") pod \"d60d9bf2-023c-4157-b235-edd97257c125\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") "
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.533985 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-nova-metadata-tls-certs\") pod \"d60d9bf2-023c-4157-b235-edd97257c125\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") "
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.534010 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-config-data\") pod \"d60d9bf2-023c-4157-b235-edd97257c125\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") "
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.534242 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-combined-ca-bundle\") pod \"d60d9bf2-023c-4157-b235-edd97257c125\" (UID: \"d60d9bf2-023c-4157-b235-edd97257c125\") "
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.534348 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60d9bf2-023c-4157-b235-edd97257c125-logs" (OuterVolumeSpecName: "logs") pod "d60d9bf2-023c-4157-b235-edd97257c125" (UID: "d60d9bf2-023c-4157-b235-edd97257c125"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.534793 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60d9bf2-023c-4157-b235-edd97257c125-logs\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.549982 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60d9bf2-023c-4157-b235-edd97257c125-kube-api-access-d5mtf" (OuterVolumeSpecName: "kube-api-access-d5mtf") pod "d60d9bf2-023c-4157-b235-edd97257c125" (UID: "d60d9bf2-023c-4157-b235-edd97257c125"). InnerVolumeSpecName "kube-api-access-d5mtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.565944 4904 generic.go:334] "Generic (PLEG): container finished" podID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerID="f93a7defda0bca9c2d478c580efe945c2f5369312ad818f56eedab10ea8a1603" exitCode=143
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.566036 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"802e4259-81ea-4d24-86b8-f607bb9ed3b7","Type":"ContainerDied","Data":"f93a7defda0bca9c2d478c580efe945c2f5369312ad818f56eedab10ea8a1603"}
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.571332 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-config-data" (OuterVolumeSpecName: "config-data") pod "d60d9bf2-023c-4157-b235-edd97257c125" (UID: "d60d9bf2-023c-4157-b235-edd97257c125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.578771 4904 generic.go:334] "Generic (PLEG): container finished" podID="d60d9bf2-023c-4157-b235-edd97257c125" containerID="37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323" exitCode=0
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.578806 4904 generic.go:334] "Generic (PLEG): container finished" podID="d60d9bf2-023c-4157-b235-edd97257c125" containerID="c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1" exitCode=143
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.579217 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d4415fc6-01e5-449e-9c42-3e7e37d226bb" containerName="nova-scheduler-scheduler" containerID="cri-o://3cb7ad408bbbe055c46a0ce689229dc52a792eed7dea11d25cc4d86e22a9aaaf" gracePeriod=30
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.579767 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.579936 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d60d9bf2-023c-4157-b235-edd97257c125","Type":"ContainerDied","Data":"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323"}
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.580001 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d60d9bf2-023c-4157-b235-edd97257c125","Type":"ContainerDied","Data":"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1"}
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.580018 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d60d9bf2-023c-4157-b235-edd97257c125","Type":"ContainerDied","Data":"b06c449ed316762aa6b8f42f0851656a2a18a8663aff6d9ca237aca3c3a332db"}
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.580347 4904 scope.go:117] "RemoveContainer" containerID="37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.596078 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d60d9bf2-023c-4157-b235-edd97257c125" (UID: "d60d9bf2-023c-4157-b235-edd97257c125"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.631741 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d60d9bf2-023c-4157-b235-edd97257c125" (UID: "d60d9bf2-023c-4157-b235-edd97257c125"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.637068 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.637099 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5mtf\" (UniqueName: \"kubernetes.io/projected/d60d9bf2-023c-4157-b235-edd97257c125-kube-api-access-d5mtf\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.637126 4904 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.637138 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60d9bf2-023c-4157-b235-edd97257c125-config-data\") on node \"crc\" DevicePath \"\""
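The block above is one complete volume teardown for the outgoing nova-metadata-0 pod (UID d60d9bf2-023c-4157-b235-edd97257c125). Each of its five volumes passes through the same three reconciler stages: "UnmountVolume started" (reconciler_common.go:159), "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and finally "Volume detached" (reconciler_common.go:293). A minimal sketch for auditing that every volume that starts unmounting also reaches "detached" (assumptions: Python 3 stdlib only; the regexes target the exact klog phrases seen above, including their literal \" escapes, and "kubelet.log" is a hypothetical path to this journal excerpt):

    import re
    from collections import defaultdict

    # Illustrative patterns for the three reconciler phrases in this journal.
    STARTED  = re.compile(r'UnmountVolume started for volume \\"(?P<vol>[^"\\]+)\\"')
    TORNDOWN = re.compile(r'UnmountVolume\.TearDown succeeded for volume "kubernetes\.io/[^"]+"'
                          r' \(OuterVolumeSpecName: "(?P<vol>[^"]+)"\)')
    DETACHED = re.compile(r'Volume detached for volume \\"(?P<vol>[^"\\]+)\\"')

    def audit(lines):
        """Return volumes that started unmounting but were never reported detached."""
        state = defaultdict(str)  # volume name -> last stage seen
        for line in lines:
            for stage, pat in (("started", STARTED), ("torn-down", TORNDOWN), ("detached", DETACHED)):
                m = pat.search(line)
                if m:
                    state[m.group("vol")] = stage
        return {vol: s for vol, s in state.items() if s != "detached"}

    with open("kubelet.log") as f:  # hypothetical path to this excerpt
        print(audit(f))             # {} when every unmount completed, as here
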
Feb 23 10:27:46 crc kubenswrapper[4904]: W0223 10:27:46.638850 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c8e85f3_888e_4a20_a6be_bed2b85f2b45.slice/crio-7ed2251a0fb41c47c2b46b2b873601b449044ca726e16fe341f35bb3742ea4b2 WatchSource:0}: Error finding container 7ed2251a0fb41c47c2b46b2b873601b449044ca726e16fe341f35bb3742ea4b2: Status 404 returned error can't find the container with id 7ed2251a0fb41c47c2b46b2b873601b449044ca726e16fe341f35bb3742ea4b2
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.642938 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.665459 4904 scope.go:117] "RemoveContainer" containerID="c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.703390 4904 scope.go:117] "RemoveContainer" containerID="37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323"
Feb 23 10:27:46 crc kubenswrapper[4904]: E0223 10:27:46.703696 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323\": container with ID starting with 37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323 not found: ID does not exist" containerID="37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.703749 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323"} err="failed to get container status \"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323\": rpc error: code = NotFound desc = could not find container \"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323\": container with ID starting with 37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323 not found: ID does not exist"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.703772 4904 scope.go:117] "RemoveContainer" containerID="c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1"
Feb 23 10:27:46 crc kubenswrapper[4904]: E0223 10:27:46.704175 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1\": container with ID starting with c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1 not found: ID does not exist" containerID="c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.704226 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1"} err="failed to get container status \"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1\": rpc error: code = NotFound desc = could not find container \"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1\": container with ID starting with c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1 not found: ID does not exist"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.704272 4904 scope.go:117] "RemoveContainer" containerID="37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.704925 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323"} err="failed to get container status \"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323\": rpc error: code = NotFound desc = could not find container \"37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323\": container with ID starting with 37cb15b0eeedb24db5ec2bd763bc66fcb8081fdbe64c12fd7570fa33212b9323 not found: ID does not exist"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.704949 4904 scope.go:117] "RemoveContainer" containerID="c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1"
Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.705129 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1"} err="failed to get container status \"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1\": rpc error: code = NotFound desc = could not find container \"c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1\": container with ID starting with c5e82d7dd89f023e011b29606a577c2611e1c45795489581f1729ddf23eae9f1 not found: ID does not exist"
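The repeated RemoveContainer / "DeleteContainer returned error" pairs above are the kubelet's container cleanup colliding with containers CRI-O already removed together with the pod; a NotFound answer to a delete is benign because deleting an already-gone container is effectively idempotent, which is why the kubelet logs these at info level and simply moves on. A sketch that separates these benign already-gone deletions from delete errors worth investigating (same assumptions as the previous sketch: stdlib only, patterns keyed to the literal phrases in these entries):

    import re
    from collections import Counter

    DELETE_ERR = re.compile(
        r'"DeleteContainer returned error" containerID=\{"Type":"cri-o","ID":"(?P<id>[0-9a-f]+)"\}')

    def classify(lines):
        """Count benign NotFound deletions vs. delete errors that need a closer look."""
        benign, suspect = Counter(), Counter()
        for line in lines:
            m = DELETE_ERR.search(line)
            if not m:
                continue
            # NotFound from the runtime means the container is already gone: idempotent, benign.
            (benign if "code = NotFound" in line else suspect)[m.group("id")[:12]] += 1
        return benign, suspect

In this excerpt classify() puts both IDs (37cb15b0eeed…, c5e82d7dd89f…) in the benign bucket and leaves the suspect bucket empty.
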
removing container" podUID="d60d9bf2-023c-4157-b235-edd97257c125" containerName="nova-metadata-log" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.971317 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60d9bf2-023c-4157-b235-edd97257c125" containerName="nova-metadata-log" Feb 23 10:27:46 crc kubenswrapper[4904]: E0223 10:27:46.971373 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60d9bf2-023c-4157-b235-edd97257c125" containerName="nova-metadata-metadata" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.971383 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60d9bf2-023c-4157-b235-edd97257c125" containerName="nova-metadata-metadata" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.971665 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60d9bf2-023c-4157-b235-edd97257c125" containerName="nova-metadata-metadata" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.971697 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60d9bf2-023c-4157-b235-edd97257c125" containerName="nova-metadata-log" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.973311 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.976597 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.976607 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 10:27:46 crc kubenswrapper[4904]: I0223 10:27:46.981975 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.045668 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-config-data\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.045726 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.045754 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.045902 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-596tr\" (UniqueName: \"kubernetes.io/projected/598def1d-4972-40aa-a33b-30481ae4a527-kube-api-access-596tr\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.046142 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/598def1d-4972-40aa-a33b-30481ae4a527-logs\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.148595 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-config-data\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.148640 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.148666 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.148724 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-596tr\" (UniqueName: \"kubernetes.io/projected/598def1d-4972-40aa-a33b-30481ae4a527-kube-api-access-596tr\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.148779 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598def1d-4972-40aa-a33b-30481ae4a527-logs\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.149308 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598def1d-4972-40aa-a33b-30481ae4a527-logs\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.154790 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.154958 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.155345 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-config-data\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.165361 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-596tr\" (UniqueName: \"kubernetes.io/projected/598def1d-4972-40aa-a33b-30481ae4a527-kube-api-access-596tr\") pod \"nova-metadata-0\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.281653 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60d9bf2-023c-4157-b235-edd97257c125" path="/var/lib/kubelet/pods/d60d9bf2-023c-4157-b235-edd97257c125/volumes" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.283050 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8" path="/var/lib/kubelet/pods/e7936ce2-c31b-4a34-92c6-3e1ca7bde7a8/volumes" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.300739 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.590494 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1c8e85f3-888e-4a20-a6be-bed2b85f2b45","Type":"ContainerStarted","Data":"18d5c21ed127f37b857135f4a07bc8d5847203bd4311310d4b6d0551fd1306bd"} Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.591151 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.591219 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1c8e85f3-888e-4a20-a6be-bed2b85f2b45","Type":"ContainerStarted","Data":"7ed2251a0fb41c47c2b46b2b873601b449044ca726e16fe341f35bb3742ea4b2"} Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.613827 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6136904960000003 podStartE2EDuration="2.613690496s" podCreationTimestamp="2026-02-23 10:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:47.610174626 +0000 UTC m=+1301.030548139" watchObservedRunningTime="2026-02-23 10:27:47.613690496 +0000 UTC m=+1301.034064009" Feb 23 10:27:47 crc kubenswrapper[4904]: I0223 10:27:47.858792 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:27:48 crc kubenswrapper[4904]: I0223 10:27:48.607755 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"598def1d-4972-40aa-a33b-30481ae4a527","Type":"ContainerStarted","Data":"408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db"} Feb 23 10:27:48 crc kubenswrapper[4904]: I0223 10:27:48.608094 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"598def1d-4972-40aa-a33b-30481ae4a527","Type":"ContainerStarted","Data":"5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045"} Feb 23 10:27:48 crc kubenswrapper[4904]: I0223 10:27:48.608108 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"598def1d-4972-40aa-a33b-30481ae4a527","Type":"ContainerStarted","Data":"33b258a8a3fb41ff06b32a791adc53b2917be3293f41d2a36eba863bb8556170"} Feb 23 10:27:48 crc kubenswrapper[4904]: E0223 10:27:48.895569 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
Feb 23 10:27:48 crc kubenswrapper[4904]: E0223 10:27:48.895569 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3cb7ad408bbbe055c46a0ce689229dc52a792eed7dea11d25cc4d86e22a9aaaf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 10:27:48 crc kubenswrapper[4904]: E0223 10:27:48.898901 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3cb7ad408bbbe055c46a0ce689229dc52a792eed7dea11d25cc4d86e22a9aaaf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 10:27:48 crc kubenswrapper[4904]: E0223 10:27:48.900549 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3cb7ad408bbbe055c46a0ce689229dc52a792eed7dea11d25cc4d86e22a9aaaf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 10:27:48 crc kubenswrapper[4904]: E0223 10:27:48.900588 4904 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d4415fc6-01e5-449e-9c42-3e7e37d226bb" containerName="nova-scheduler-scheduler"
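The three ExecSync failures above are the readiness probe of nova-scheduler-0 (an exec probe running /usr/bin/pgrep -r DRST nova-scheduler, which appears to look for a nova-scheduler process in run state D, R, S, or T) racing with the container's termination: CRI-O refuses to register a new exec PID in a stopping container, so the probe errors rather than fails. The timing is consistent with a clean shutdown: the kubelet issued the stop at 10:27:46.579217 with gracePeriod=30, and the container exited with exitCode=0 at 10:27:50.650318, roughly four seconds into the thirty-second window. A quick check of that arithmetic (stdlib only; timestamps copied from the entries above):

    from datetime import datetime

    kill = datetime.strptime("10:27:46.579217", "%H:%M:%S.%f")
    died = datetime.strptime("10:27:50.650318", "%H:%M:%S.%f")
    grace = 30.0
    elapsed = (died - kill).total_seconds()
    print(f"exited {elapsed:.3f}s into the {grace:.0f}s grace period")  # 4.071s
    assert elapsed < grace  # finished on its own; no SIGKILL was needed
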
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.643484 4904 generic.go:334] "Generic (PLEG): container finished" podID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerID="f881512fa121667304f4aa20732910cee9f687bf22ce0f8457a7968f0192f2e2" exitCode=0
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.644066 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"802e4259-81ea-4d24-86b8-f607bb9ed3b7","Type":"ContainerDied","Data":"f881512fa121667304f4aa20732910cee9f687bf22ce0f8457a7968f0192f2e2"}
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.644116 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"802e4259-81ea-4d24-86b8-f607bb9ed3b7","Type":"ContainerDied","Data":"29ca8cf808cd1c137588f12c539e48f72bfbd804f5a34609b611bbd93868568a"}
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.644137 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29ca8cf808cd1c137588f12c539e48f72bfbd804f5a34609b611bbd93868568a"
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.650318 4904 generic.go:334] "Generic (PLEG): container finished" podID="d4415fc6-01e5-449e-9c42-3e7e37d226bb" containerID="3cb7ad408bbbe055c46a0ce689229dc52a792eed7dea11d25cc4d86e22a9aaaf" exitCode=0
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.650366 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d4415fc6-01e5-449e-9c42-3e7e37d226bb","Type":"ContainerDied","Data":"3cb7ad408bbbe055c46a0ce689229dc52a792eed7dea11d25cc4d86e22a9aaaf"}
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.713666 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.743921 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.743890313 podStartE2EDuration="4.743890313s" podCreationTimestamp="2026-02-23 10:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:48.643305763 +0000 UTC m=+1302.063679296" watchObservedRunningTime="2026-02-23 10:27:50.743890313 +0000 UTC m=+1304.164263846"
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.819663 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.860108 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802e4259-81ea-4d24-86b8-f607bb9ed3b7-logs\") pod \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") "
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.860223 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-config-data\") pod \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") "
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.860427 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrmhk\" (UniqueName: \"kubernetes.io/projected/802e4259-81ea-4d24-86b8-f607bb9ed3b7-kube-api-access-zrmhk\") pod \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") "
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.860563 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-combined-ca-bundle\") pod \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\" (UID: \"802e4259-81ea-4d24-86b8-f607bb9ed3b7\") "
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.861142 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802e4259-81ea-4d24-86b8-f607bb9ed3b7-logs" (OuterVolumeSpecName: "logs") pod "802e4259-81ea-4d24-86b8-f607bb9ed3b7" (UID: "802e4259-81ea-4d24-86b8-f607bb9ed3b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.862073 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802e4259-81ea-4d24-86b8-f607bb9ed3b7-logs\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.884295 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802e4259-81ea-4d24-86b8-f607bb9ed3b7-kube-api-access-zrmhk" (OuterVolumeSpecName: "kube-api-access-zrmhk") pod "802e4259-81ea-4d24-86b8-f607bb9ed3b7" (UID: "802e4259-81ea-4d24-86b8-f607bb9ed3b7"). InnerVolumeSpecName "kube-api-access-zrmhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.893412 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-config-data" (OuterVolumeSpecName: "config-data") pod "802e4259-81ea-4d24-86b8-f607bb9ed3b7" (UID: "802e4259-81ea-4d24-86b8-f607bb9ed3b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.906781 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "802e4259-81ea-4d24-86b8-f607bb9ed3b7" (UID: "802e4259-81ea-4d24-86b8-f607bb9ed3b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.963186 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-combined-ca-bundle\") pod \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") "
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.963799 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9vd5\" (UniqueName: \"kubernetes.io/projected/d4415fc6-01e5-449e-9c42-3e7e37d226bb-kube-api-access-x9vd5\") pod \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") "
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.963956 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-config-data\") pod \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\" (UID: \"d4415fc6-01e5-449e-9c42-3e7e37d226bb\") "
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.964727 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.964809 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802e4259-81ea-4d24-86b8-f607bb9ed3b7-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.964867 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrmhk\" (UniqueName: \"kubernetes.io/projected/802e4259-81ea-4d24-86b8-f607bb9ed3b7-kube-api-access-zrmhk\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.969002 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4415fc6-01e5-449e-9c42-3e7e37d226bb-kube-api-access-x9vd5" (OuterVolumeSpecName: "kube-api-access-x9vd5") pod "d4415fc6-01e5-449e-9c42-3e7e37d226bb" (UID: "d4415fc6-01e5-449e-9c42-3e7e37d226bb"). InnerVolumeSpecName "kube-api-access-x9vd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.992238 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-config-data" (OuterVolumeSpecName: "config-data") pod "d4415fc6-01e5-449e-9c42-3e7e37d226bb" (UID: "d4415fc6-01e5-449e-9c42-3e7e37d226bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:27:50 crc kubenswrapper[4904]: I0223 10:27:50.992765 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4415fc6-01e5-449e-9c42-3e7e37d226bb" (UID: "d4415fc6-01e5-449e-9c42-3e7e37d226bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.067142 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.067428 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4415fc6-01e5-449e-9c42-3e7e37d226bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.067548 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9vd5\" (UniqueName: \"kubernetes.io/projected/d4415fc6-01e5-449e-9c42-3e7e37d226bb-kube-api-access-x9vd5\") on node \"crc\" DevicePath \"\""
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.663798 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d4415fc6-01e5-449e-9c42-3e7e37d226bb","Type":"ContainerDied","Data":"e902e93b52bfa0206791ce2ed7c308a9d54d89baf3bf61760c8f1c7763f3bd60"}
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.663827 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.663880 4904 scope.go:117] "RemoveContainer" containerID="3cb7ad408bbbe055c46a0ce689229dc52a792eed7dea11d25cc4d86e22a9aaaf"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.664955 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.696931 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.710605 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.723318 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.738358 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.762407 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 10:27:51 crc kubenswrapper[4904]: E0223 10:27:51.763005 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-log"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.763033 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-log"
Feb 23 10:27:51 crc kubenswrapper[4904]: E0223 10:27:51.763061 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4415fc6-01e5-449e-9c42-3e7e37d226bb" containerName="nova-scheduler-scheduler"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.763073 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4415fc6-01e5-449e-9c42-3e7e37d226bb" containerName="nova-scheduler-scheduler"
Feb 23 10:27:51 crc kubenswrapper[4904]: E0223 10:27:51.763135 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-api"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.763143 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-api"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.763489 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-log"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.763527 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4415fc6-01e5-449e-9c42-3e7e37d226bb" containerName="nova-scheduler-scheduler"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.763548 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" containerName="nova-api-api"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.768040 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.770377 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.775065 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.788128 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.790151 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.793308 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.802597 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.886409 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5rq\" (UniqueName: \"kubernetes.io/projected/139994b7-e887-4940-b942-08bdf6b39dc5-kube-api-access-4v5rq\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.886472 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.886710 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-config-data\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.886923 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-config-data\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.887065 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfe9e77-e9fc-4e65-b187-7be1c729739b-logs\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.887160 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.887181 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6p8p\" (UniqueName: \"kubernetes.io/projected/5bfe9e77-e9fc-4e65-b187-7be1c729739b-kube-api-access-c6p8p\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.989640 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v5rq\" (UniqueName: \"kubernetes.io/projected/139994b7-e887-4940-b942-08bdf6b39dc5-kube-api-access-4v5rq\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.989708 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.989766 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-config-data\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.989807 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-config-data\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.989861 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfe9e77-e9fc-4e65-b187-7be1c729739b-logs\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.989909 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.989928 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6p8p\" (UniqueName: \"kubernetes.io/projected/5bfe9e77-e9fc-4e65-b187-7be1c729739b-kube-api-access-c6p8p\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.990375 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfe9e77-e9fc-4e65-b187-7be1c729739b-logs\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.996782 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.997285 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-config-data\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:51 crc kubenswrapper[4904]: I0223 10:27:51.998264 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-config-data\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.008667 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.010407 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v5rq\" (UniqueName: \"kubernetes.io/projected/139994b7-e887-4940-b942-08bdf6b39dc5-kube-api-access-4v5rq\") pod \"nova-scheduler-0\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " pod="openstack/nova-scheduler-0"
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.011336 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6p8p\" (UniqueName: \"kubernetes.io/projected/5bfe9e77-e9fc-4e65-b187-7be1c729739b-kube-api-access-c6p8p\") pod \"nova-api-0\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " pod="openstack/nova-api-0"
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.090966 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.119055 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.300938 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.301092 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 10:27:52 crc kubenswrapper[4904]: W0223 10:27:52.603637 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod139994b7_e887_4940_b942_08bdf6b39dc5.slice/crio-1bd337df6cdbc922bda7cb6164437efd2fa9bad9c07bda7f1ea50d0107e0ba95 WatchSource:0}: Error finding container 1bd337df6cdbc922bda7cb6164437efd2fa9bad9c07bda7f1ea50d0107e0ba95: Status 404 returned error can't find the container with id 1bd337df6cdbc922bda7cb6164437efd2fa9bad9c07bda7f1ea50d0107e0ba95
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.617948 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 10:27:52 crc kubenswrapper[4904]: W0223 10:27:52.672317 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bfe9e77_e9fc_4e65_b187_7be1c729739b.slice/crio-2d49eb3ed2fb7623af90c89bd569c17031f4e9780f7e013e32b036afb0f13d8a WatchSource:0}: Error finding container 2d49eb3ed2fb7623af90c89bd569c17031f4e9780f7e013e32b036afb0f13d8a: Status 404 returned error can't find the container with id 2d49eb3ed2fb7623af90c89bd569c17031f4e9780f7e013e32b036afb0f13d8a
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.674579 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 10:27:52 crc kubenswrapper[4904]: I0223 10:27:52.687288 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"139994b7-e887-4940-b942-08bdf6b39dc5","Type":"ContainerStarted","Data":"1bd337df6cdbc922bda7cb6164437efd2fa9bad9c07bda7f1ea50d0107e0ba95"}
Feb 23 10:27:53 crc kubenswrapper[4904]: I0223 10:27:53.274819 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802e4259-81ea-4d24-86b8-f607bb9ed3b7" path="/var/lib/kubelet/pods/802e4259-81ea-4d24-86b8-f607bb9ed3b7/volumes"
Feb 23 10:27:53 crc kubenswrapper[4904]: I0223 10:27:53.276768 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4415fc6-01e5-449e-9c42-3e7e37d226bb" path="/var/lib/kubelet/pods/d4415fc6-01e5-449e-9c42-3e7e37d226bb/volumes"
Feb 23 10:27:53 crc kubenswrapper[4904]: I0223 10:27:53.701961 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfe9e77-e9fc-4e65-b187-7be1c729739b","Type":"ContainerStarted","Data":"e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26"}
Feb 23 10:27:53 crc kubenswrapper[4904]: I0223 10:27:53.702020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfe9e77-e9fc-4e65-b187-7be1c729739b","Type":"ContainerStarted","Data":"389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95"}
Feb 23 10:27:53 crc kubenswrapper[4904]: I0223 10:27:53.702037 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfe9e77-e9fc-4e65-b187-7be1c729739b","Type":"ContainerStarted","Data":"2d49eb3ed2fb7623af90c89bd569c17031f4e9780f7e013e32b036afb0f13d8a"}
Feb 23 10:27:53 crc kubenswrapper[4904]: I0223 10:27:53.710539 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"139994b7-e887-4940-b942-08bdf6b39dc5","Type":"ContainerStarted","Data":"ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc"}
Feb 23 10:27:53 crc kubenswrapper[4904]: I0223 10:27:53.734687 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.734660189 podStartE2EDuration="2.734660189s" podCreationTimestamp="2026-02-23 10:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:53.728364159 +0000 UTC m=+1307.148737692" watchObservedRunningTime="2026-02-23 10:27:53.734660189 +0000 UTC m=+1307.155033702"
Feb 23 10:27:53 crc kubenswrapper[4904]: I0223 10:27:53.758439 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7584054350000002 podStartE2EDuration="2.758405435s" podCreationTimestamp="2026-02-23 10:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:27:53.748360109 +0000 UTC m=+1307.168733662" watchObservedRunningTime="2026-02-23 10:27:53.758405435 +0000 UTC m=+1307.178778958"
Feb 23 10:27:54 crc kubenswrapper[4904]: I0223 10:27:54.480745 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 23 10:27:56 crc kubenswrapper[4904]: I0223 10:27:56.025068 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 23 10:27:57 crc kubenswrapper[4904]: I0223 10:27:57.091232 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 23 10:27:57 crc kubenswrapper[4904]: I0223 10:27:57.306077 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 23 10:27:57 crc kubenswrapper[4904]: I0223 10:27:57.306137 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.268612 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.270112 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a76369f2-3ab0-43c6-b601-9c2c0d5636c9" containerName="kube-state-metrics" containerID="cri-o://704260c70f4076d1fec4f980f7f30afab803ee1bb524a3f9e9eac5ade1b470c1" gracePeriod=30
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.314881 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.315045 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.766665 4904 generic.go:334] "Generic (PLEG): container finished" podID="a76369f2-3ab0-43c6-b601-9c2c0d5636c9" containerID="704260c70f4076d1fec4f980f7f30afab803ee1bb524a3f9e9eac5ade1b470c1" exitCode=2
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.767047 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a76369f2-3ab0-43c6-b601-9c2c0d5636c9","Type":"ContainerDied","Data":"704260c70f4076d1fec4f980f7f30afab803ee1bb524a3f9e9eac5ade1b470c1"}
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.767077 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a76369f2-3ab0-43c6-b601-9c2c0d5636c9","Type":"ContainerDied","Data":"8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b"}
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.767087 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8251335abfa1ee8d633543f70e56f10b50cfdaa670ce35ffeb5e2e688568cf6b"
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.825975 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.969809 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnpjl\" (UniqueName: \"kubernetes.io/projected/a76369f2-3ab0-43c6-b601-9c2c0d5636c9-kube-api-access-mnpjl\") pod \"a76369f2-3ab0-43c6-b601-9c2c0d5636c9\" (UID: \"a76369f2-3ab0-43c6-b601-9c2c0d5636c9\") "
Feb 23 10:27:58 crc kubenswrapper[4904]: I0223 10:27:58.978949 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76369f2-3ab0-43c6-b601-9c2c0d5636c9-kube-api-access-mnpjl" (OuterVolumeSpecName: "kube-api-access-mnpjl") pod "a76369f2-3ab0-43c6-b601-9c2c0d5636c9" (UID: "a76369f2-3ab0-43c6-b601-9c2c0d5636c9"). InnerVolumeSpecName "kube-api-access-mnpjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
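The two "Probe failed" entries above explain the earlier startup status="unhealthy" for nova-metadata-0: the kubelet's HTTPS GET to https://10.217.0.215:8775/ timed out ("request canceled" and "context deadline exceeded" are both Go HTTP client timeouts) while the service was still warming up, and startup probes exist precisely to absorb this window before liveness and readiness checks take over. A rough stdlib re-creation of what the prober does, under the assumptions that the probe is a plain GET with a short timeout and that, as kubelet does for HTTPS probes, certificate verification is skipped:

    import ssl
    import urllib.request

    def probe(url: str, timeout: float = 1.0) -> str:
        """Approximate an HTTPS GET probe: an answer before the deadline counts as success."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE   # kubelet skips certificate verification for https probes
        try:
            with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
                return f"success: HTTP {resp.status}"
        except OSError as exc:            # timeouts and HTTP errors both land here
            return f"failure: {exc}"

    # Pod IP taken from the log; it is only reachable from the node itself.
    print(probe("https://10.217.0.215:8775/"))
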
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.072963 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnpjl\" (UniqueName: \"kubernetes.io/projected/a76369f2-3ab0-43c6-b601-9c2c0d5636c9-kube-api-access-mnpjl\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.788013 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.814956 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.830566 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.841501 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 10:27:59 crc kubenswrapper[4904]: E0223 10:27:59.842046 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76369f2-3ab0-43c6-b601-9c2c0d5636c9" containerName="kube-state-metrics" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.842073 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76369f2-3ab0-43c6-b601-9c2c0d5636c9" containerName="kube-state-metrics" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.842340 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76369f2-3ab0-43c6-b601-9c2c0d5636c9" containerName="kube-state-metrics" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.843391 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.845673 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.851728 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.854186 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.998543 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgqr\" (UniqueName: \"kubernetes.io/projected/50e2ff27-2573-4549-b31e-3fba348ec929-kube-api-access-7kgqr\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.998674 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 10:27:59.998809 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:27:59 crc kubenswrapper[4904]: I0223 
10:27:59.998890 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.102045 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.102378 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgqr\" (UniqueName: \"kubernetes.io/projected/50e2ff27-2573-4549-b31e-3fba348ec929-kube-api-access-7kgqr\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.103192 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.104007 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.108310 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.108685 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.124625 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e2ff27-2573-4549-b31e-3fba348ec929-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.140987 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgqr\" (UniqueName: \"kubernetes.io/projected/50e2ff27-2573-4549-b31e-3fba348ec929-kube-api-access-7kgqr\") pod \"kube-state-metrics-0\" (UID: \"50e2ff27-2573-4549-b31e-3fba348ec929\") " pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.161242 4904 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.598667 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.600071 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="proxy-httpd" containerID="cri-o://b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c" gracePeriod=30 Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.600322 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="sg-core" containerID="cri-o://ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300" gracePeriod=30 Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.600432 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="ceilometer-notification-agent" containerID="cri-o://4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d" gracePeriod=30 Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.599460 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="ceilometer-central-agent" containerID="cri-o://71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a" gracePeriod=30 Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.713823 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.799154 4904 generic.go:334] "Generic (PLEG): container finished" podID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerID="b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c" exitCode=0 Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.799191 4904 generic.go:334] "Generic (PLEG): container finished" podID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerID="ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300" exitCode=2 Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.799227 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerDied","Data":"b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c"} Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.799256 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerDied","Data":"ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300"} Feb 23 10:28:00 crc kubenswrapper[4904]: I0223 10:28:00.800309 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"50e2ff27-2573-4549-b31e-3fba348ec929","Type":"ContainerStarted","Data":"56a6869bbc9bca08c471474f1cb1be7884a4198f800f8b511c3feaf0fbb47f19"} Feb 23 10:28:01 crc kubenswrapper[4904]: E0223 10:28:01.197469 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbf26cb5_2c3e_4742_9595_ae7e8dad8af1.slice/crio-71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbf26cb5_2c3e_4742_9595_ae7e8dad8af1.slice/crio-conmon-71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a.scope\": RecentStats: unable to find data in memory cache]" Feb 23 10:28:01 crc kubenswrapper[4904]: I0223 10:28:01.268448 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76369f2-3ab0-43c6-b601-9c2c0d5636c9" path="/var/lib/kubelet/pods/a76369f2-3ab0-43c6-b601-9c2c0d5636c9/volumes" Feb 23 10:28:01 crc kubenswrapper[4904]: I0223 10:28:01.820014 4904 generic.go:334] "Generic (PLEG): container finished" podID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerID="71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a" exitCode=0 Feb 23 10:28:01 crc kubenswrapper[4904]: I0223 10:28:01.820104 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerDied","Data":"71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a"} Feb 23 10:28:01 crc kubenswrapper[4904]: I0223 10:28:01.822548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"50e2ff27-2573-4549-b31e-3fba348ec929","Type":"ContainerStarted","Data":"6ff8049325914e76a3f99576bebd947137c363f636631300d0b6f1bbeeacad66"} Feb 23 10:28:01 crc kubenswrapper[4904]: I0223 10:28:01.848629 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.43722355 podStartE2EDuration="2.848607467s" podCreationTimestamp="2026-02-23 10:27:59 +0000 UTC" firstStartedPulling="2026-02-23 10:28:00.72838706 +0000 UTC m=+1314.148760573" lastFinishedPulling="2026-02-23 10:28:01.139770977 +0000 UTC m=+1314.560144490" observedRunningTime="2026-02-23 10:28:01.842429361 +0000 UTC m=+1315.262802924" watchObservedRunningTime="2026-02-23 10:28:01.848607467 +0000 UTC m=+1315.268980980" Feb 23 10:28:02 crc kubenswrapper[4904]: I0223 10:28:02.092089 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 10:28:02 crc kubenswrapper[4904]: I0223 10:28:02.121364 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 10:28:02 crc kubenswrapper[4904]: I0223 10:28:02.121414 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 10:28:02 crc kubenswrapper[4904]: I0223 10:28:02.205150 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 10:28:02 crc kubenswrapper[4904]: I0223 10:28:02.832927 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 23 10:28:02 crc kubenswrapper[4904]: I0223 10:28:02.871741 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.224967 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout 
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.224967 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.224981 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.560791 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="a76369f2-3ab0-43c6-b601-9c2c0d5636c9" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.669219 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.771986 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-sg-core-conf-yaml\") pod \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") "
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.772175 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-run-httpd\") pod \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") "
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.772238 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhtrt\" (UniqueName: \"kubernetes.io/projected/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-kube-api-access-dhtrt\") pod \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") "
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.772337 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-log-httpd\") pod \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") "
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.772434 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-scripts\") pod \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") "
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.772506 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-combined-ca-bundle\") pod \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") "
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.772594 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-config-data\") pod \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\" (UID: \"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1\") "
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.776337 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" (UID: "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.777626 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" (UID: "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.784817 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-kube-api-access-dhtrt" (OuterVolumeSpecName: "kube-api-access-dhtrt") pod "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" (UID: "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1"). InnerVolumeSpecName "kube-api-access-dhtrt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.794207 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-scripts" (OuterVolumeSpecName: "scripts") pod "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" (UID: "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.857438 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" (UID: "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.883070 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhtrt\" (UniqueName: \"kubernetes.io/projected/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-kube-api-access-dhtrt\") on node \"crc\" DevicePath \"\""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.883138 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.883152 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.883166 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.883178 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.925234 4904 generic.go:334] "Generic (PLEG): container finished" podID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerID="4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d" exitCode=0
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.926003 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.926042 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerDied","Data":"4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d"}
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.927458 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf26cb5-2c3e-4742-9595-ae7e8dad8af1","Type":"ContainerDied","Data":"1496dda5dccdb5cb66cfce6902d462912d8d71a7da89ea86a17e61aef658300a"}
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.927651 4904 scope.go:117] "RemoveContainer" containerID="b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c"
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.950973 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" (UID: "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.975023 4904 scope.go:117] "RemoveContainer" containerID="ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300"
Feb 23 10:28:03 crc kubenswrapper[4904]: I0223 10:28:03.985379 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.051397 4904 scope.go:117] "RemoveContainer" containerID="4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d"
Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.076868 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-config-data" (OuterVolumeSpecName: "config-data") pod "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" (UID: "cbf26cb5-2c3e-4742-9595-ae7e8dad8af1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.087746 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.100461 4904 scope.go:117] "RemoveContainer" containerID="71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a"
Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.131014 4904 scope.go:117] "RemoveContainer" containerID="b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c"
Feb 23 10:28:04 crc kubenswrapper[4904]: E0223 10:28:04.131636 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c\": container with ID starting with b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c not found: ID does not exist" containerID="b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c"
Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.131696 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c"} err="failed to get container status \"b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c\": rpc error: code = NotFound desc = could not find container \"b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c\": container with ID starting with b428acc2154b376f24adab7a67f245f9805661e053a52c33db3e5e5dbe656b9c not found: ID does not exist"
Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.131754 4904 scope.go:117] "RemoveContainer" containerID="ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300"
Feb 23 10:28:04 crc kubenswrapper[4904]: E0223 10:28:04.134135 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300\": container with ID starting with ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300 not found: ID does not exist" containerID="ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300"
error" containerID={"Type":"cri-o","ID":"ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300"} err="failed to get container status \"ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300\": rpc error: code = NotFound desc = could not find container \"ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300\": container with ID starting with ef77e0f0612dfb96f815496ad96bb375ea0085d1f7889fd5eda6b600237fd300 not found: ID does not exist" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.134227 4904 scope.go:117] "RemoveContainer" containerID="4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d" Feb 23 10:28:04 crc kubenswrapper[4904]: E0223 10:28:04.135380 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d\": container with ID starting with 4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d not found: ID does not exist" containerID="4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.135420 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d"} err="failed to get container status \"4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d\": rpc error: code = NotFound desc = could not find container \"4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d\": container with ID starting with 4c9547b95d66000bb5d97a7132258c6fff1c569f0d36e87e78405d8c5e2c263d not found: ID does not exist" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.135442 4904 scope.go:117] "RemoveContainer" containerID="71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a" Feb 23 10:28:04 crc kubenswrapper[4904]: E0223 10:28:04.135822 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a\": container with ID starting with 71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a not found: ID does not exist" containerID="71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.135869 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a"} err="failed to get container status \"71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a\": rpc error: code = NotFound desc = could not find container \"71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a\": container with ID starting with 71f6d4c55d3baf1289b82951afde783697d7648be6db2ccf9ec589845f75e35a not found: ID does not exist" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.316550 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.329951 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.343288 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:04 crc kubenswrapper[4904]: E0223 10:28:04.344063 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="ceilometer-notification-agent" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.344139 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="ceilometer-notification-agent" Feb 23 10:28:04 crc kubenswrapper[4904]: E0223 10:28:04.344230 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="ceilometer-central-agent" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.344291 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="ceilometer-central-agent" Feb 23 10:28:04 crc kubenswrapper[4904]: E0223 10:28:04.344374 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="sg-core" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.345532 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="sg-core" Feb 23 10:28:04 crc kubenswrapper[4904]: E0223 10:28:04.345640 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="proxy-httpd" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.345695 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="proxy-httpd" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.345963 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="ceilometer-central-agent" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.346043 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="sg-core" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.346107 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="ceilometer-notification-agent" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.346165 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" containerName="proxy-httpd" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.348624 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.354414 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.354643 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.354829 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.377924 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.396460 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.396527 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-config-data\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.396556 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.396593 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-log-httpd\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.396636 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-scripts\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.396694 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.396770 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2dr7\" (UniqueName: \"kubernetes.io/projected/1c300ac9-b7ea-4f33-8216-eac39c5497a4-kube-api-access-f2dr7\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.396802 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-run-httpd\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.499850 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.499914 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-config-data\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.499940 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.499988 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-log-httpd\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.500025 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-scripts\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.500074 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.500133 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2dr7\" (UniqueName: \"kubernetes.io/projected/1c300ac9-b7ea-4f33-8216-eac39c5497a4-kube-api-access-f2dr7\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.500160 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-run-httpd\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.501204 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-run-httpd\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.501389 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-log-httpd\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.506202 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.506706 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-scripts\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.507565 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.510424 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-config-data\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.513442 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.531830 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2dr7\" (UniqueName: \"kubernetes.io/projected/1c300ac9-b7ea-4f33-8216-eac39c5497a4-kube-api-access-f2dr7\") pod \"ceilometer-0\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " pod="openstack/ceilometer-0" Feb 23 10:28:04 crc kubenswrapper[4904]: I0223 10:28:04.672756 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:28:05 crc kubenswrapper[4904]: W0223 10:28:05.199153 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c300ac9_b7ea_4f33_8216_eac39c5497a4.slice/crio-73aebdce89d087f489e2032becbad82e867e5bf6998e66f3e4094de2d6192ac7 WatchSource:0}: Error finding container 73aebdce89d087f489e2032becbad82e867e5bf6998e66f3e4094de2d6192ac7: Status 404 returned error can't find the container with id 73aebdce89d087f489e2032becbad82e867e5bf6998e66f3e4094de2d6192ac7 Feb 23 10:28:05 crc kubenswrapper[4904]: I0223 10:28:05.200155 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:05 crc kubenswrapper[4904]: I0223 10:28:05.278181 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf26cb5-2c3e-4742-9595-ae7e8dad8af1" path="/var/lib/kubelet/pods/cbf26cb5-2c3e-4742-9595-ae7e8dad8af1/volumes" Feb 23 10:28:05 crc kubenswrapper[4904]: I0223 10:28:05.961539 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerStarted","Data":"c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a"} Feb 23 10:28:05 crc kubenswrapper[4904]: I0223 10:28:05.961891 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerStarted","Data":"73aebdce89d087f489e2032becbad82e867e5bf6998e66f3e4094de2d6192ac7"} Feb 23 10:28:06 crc kubenswrapper[4904]: I0223 10:28:06.977608 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerStarted","Data":"6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8"} Feb 23 10:28:07 crc kubenswrapper[4904]: I0223 10:28:07.331042 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 10:28:07 crc kubenswrapper[4904]: I0223 10:28:07.333553 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 10:28:07 crc kubenswrapper[4904]: I0223 10:28:07.345276 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 10:28:07 crc kubenswrapper[4904]: I0223 10:28:07.992575 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerStarted","Data":"2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7"} Feb 23 10:28:07 crc kubenswrapper[4904]: I0223 10:28:07.997445 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:09.844266 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.026456 4904 generic.go:334] "Generic (PLEG): container finished" podID="ffd478fe-8ee9-4a73-bfb8-d817c50124f1" containerID="c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc" exitCode=137 Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.026521 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.026542 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ffd478fe-8ee9-4a73-bfb8-d817c50124f1","Type":"ContainerDied","Data":"c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc"} Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.026589 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ffd478fe-8ee9-4a73-bfb8-d817c50124f1","Type":"ContainerDied","Data":"981c123598e57e03400a1d1dbbf0b32fd0711711c5d58c7c414996f94c8132ca"} Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.026615 4904 scope.go:117] "RemoveContainer" containerID="c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.037728 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerStarted","Data":"a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9"} Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.038565 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4hkh\" (UniqueName: \"kubernetes.io/projected/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-kube-api-access-c4hkh\") pod \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.038799 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-combined-ca-bundle\") pod \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.039698 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-config-data\") pod \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\" (UID: \"ffd478fe-8ee9-4a73-bfb8-d817c50124f1\") " Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.055250 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-kube-api-access-c4hkh" (OuterVolumeSpecName: "kube-api-access-c4hkh") pod "ffd478fe-8ee9-4a73-bfb8-d817c50124f1" (UID: "ffd478fe-8ee9-4a73-bfb8-d817c50124f1"). InnerVolumeSpecName "kube-api-access-c4hkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.107914 4904 scope.go:117] "RemoveContainer" containerID="c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc" Feb 23 10:28:10 crc kubenswrapper[4904]: E0223 10:28:10.110368 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc\": container with ID starting with c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc not found: ID does not exist" containerID="c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.110435 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc"} err="failed to get container status \"c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc\": rpc error: code = NotFound desc = could not find container \"c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc\": container with ID starting with c58af4c87621f6d5d41f901cb8735a27915abff375cef7667d7b146451d5ecdc not found: ID does not exist" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.130070 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-config-data" (OuterVolumeSpecName: "config-data") pod "ffd478fe-8ee9-4a73-bfb8-d817c50124f1" (UID: "ffd478fe-8ee9-4a73-bfb8-d817c50124f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.130606 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.102139319 podStartE2EDuration="6.130585151s" podCreationTimestamp="2026-02-23 10:28:04 +0000 UTC" firstStartedPulling="2026-02-23 10:28:05.204886863 +0000 UTC m=+1318.625260376" lastFinishedPulling="2026-02-23 10:28:09.233332695 +0000 UTC m=+1322.653706208" observedRunningTime="2026-02-23 10:28:10.108978586 +0000 UTC m=+1323.529352099" watchObservedRunningTime="2026-02-23 10:28:10.130585151 +0000 UTC m=+1323.550958674" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.134842 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffd478fe-8ee9-4a73-bfb8-d817c50124f1" (UID: "ffd478fe-8ee9-4a73-bfb8-d817c50124f1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.150395 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4hkh\" (UniqueName: \"kubernetes.io/projected/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-kube-api-access-c4hkh\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.150426 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.150436 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd478fe-8ee9-4a73-bfb8-d817c50124f1-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.339963 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.459825 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.488057 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.519410 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:28:10 crc kubenswrapper[4904]: E0223 10:28:10.520445 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd478fe-8ee9-4a73-bfb8-d817c50124f1" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.520552 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd478fe-8ee9-4a73-bfb8-d817c50124f1" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.521671 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd478fe-8ee9-4a73-bfb8-d817c50124f1" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.525914 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.530225 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.530604 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.531983 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.536639 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.693274 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.693344 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.693453 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjpps\" (UniqueName: \"kubernetes.io/projected/d3a2b238-4696-42c9-b713-365220c2ce44-kube-api-access-zjpps\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.693564 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.693591 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.796141 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.796194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 
10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.796261 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjpps\" (UniqueName: \"kubernetes.io/projected/d3a2b238-4696-42c9-b713-365220c2ce44-kube-api-access-zjpps\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.796336 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.796362 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.810398 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.811480 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.814466 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.822289 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjpps\" (UniqueName: \"kubernetes.io/projected/d3a2b238-4696-42c9-b713-365220c2ce44-kube-api-access-zjpps\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.822815 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a2b238-4696-42c9-b713-365220c2ce44-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a2b238-4696-42c9-b713-365220c2ce44\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:10 crc kubenswrapper[4904]: I0223 10:28:10.865589 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:11 crc kubenswrapper[4904]: I0223 10:28:11.051935 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 10:28:11 crc kubenswrapper[4904]: I0223 10:28:11.269221 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd478fe-8ee9-4a73-bfb8-d817c50124f1" path="/var/lib/kubelet/pods/ffd478fe-8ee9-4a73-bfb8-d817c50124f1/volumes" Feb 23 10:28:11 crc kubenswrapper[4904]: W0223 10:28:11.379994 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3a2b238_4696_42c9_b713_365220c2ce44.slice/crio-7bad40963ae30d942fa708fd00b5cde19d2ae345e615d972c5f0229f7ccc2f06 WatchSource:0}: Error finding container 7bad40963ae30d942fa708fd00b5cde19d2ae345e615d972c5f0229f7ccc2f06: Status 404 returned error can't find the container with id 7bad40963ae30d942fa708fd00b5cde19d2ae345e615d972c5f0229f7ccc2f06 Feb 23 10:28:11 crc kubenswrapper[4904]: I0223 10:28:11.382707 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 10:28:12 crc kubenswrapper[4904]: I0223 10:28:12.062518 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3a2b238-4696-42c9-b713-365220c2ce44","Type":"ContainerStarted","Data":"c1a5abb415f6073f541ea12f263e71f7b135ce10e18d6105ca47fc5312afc627"} Feb 23 10:28:12 crc kubenswrapper[4904]: I0223 10:28:12.062901 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3a2b238-4696-42c9-b713-365220c2ce44","Type":"ContainerStarted","Data":"7bad40963ae30d942fa708fd00b5cde19d2ae345e615d972c5f0229f7ccc2f06"} Feb 23 10:28:12 crc kubenswrapper[4904]: I0223 10:28:12.084960 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.084936536 podStartE2EDuration="2.084936536s" podCreationTimestamp="2026-02-23 10:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:28:12.083826855 +0000 UTC m=+1325.504200378" watchObservedRunningTime="2026-02-23 10:28:12.084936536 +0000 UTC m=+1325.505310049" Feb 23 10:28:12 crc kubenswrapper[4904]: I0223 10:28:12.125090 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 10:28:12 crc kubenswrapper[4904]: I0223 10:28:12.127254 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 10:28:12 crc kubenswrapper[4904]: I0223 10:28:12.134332 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 10:28:12 crc kubenswrapper[4904]: I0223 10:28:12.138842 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.075609 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.083746 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.371668 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m8pl8"] Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.373984 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.403368 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m8pl8"]
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.403390 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9td\" (UniqueName: \"kubernetes.io/projected/47101850-5ecb-4158-8a6e-2c4541850b48-kube-api-access-5f9td\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.403815 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-config\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.403931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.404124 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.404407 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.404443 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.506912 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.506975 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.507150 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9td\" (UniqueName: \"kubernetes.io/projected/47101850-5ecb-4158-8a6e-2c4541850b48-kube-api-access-5f9td\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.507199 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-config\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.507250 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.507308 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.508598 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.508610 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.508740 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.510164 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-config\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.510787 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.543390 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9td\" (UniqueName: \"kubernetes.io/projected/47101850-5ecb-4158-8a6e-2c4541850b48-kube-api-access-5f9td\") pod \"dnsmasq-dns-89c5cd4d5-m8pl8\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:13 crc kubenswrapper[4904]: I0223 10:28:13.699157 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:14 crc kubenswrapper[4904]: I0223 10:28:14.354811 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m8pl8"]
Feb 23 10:28:15 crc kubenswrapper[4904]: I0223 10:28:15.095147 4904 generic.go:334] "Generic (PLEG): container finished" podID="47101850-5ecb-4158-8a6e-2c4541850b48" containerID="550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970" exitCode=0
Feb 23 10:28:15 crc kubenswrapper[4904]: I0223 10:28:15.095436 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" event={"ID":"47101850-5ecb-4158-8a6e-2c4541850b48","Type":"ContainerDied","Data":"550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970"}
Feb 23 10:28:15 crc kubenswrapper[4904]: I0223 10:28:15.095785 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" event={"ID":"47101850-5ecb-4158-8a6e-2c4541850b48","Type":"ContainerStarted","Data":"89a06f7ff63e4bfb522b997f34f137f39b92f890ca2d06298b0ea285aca87ce1"}
Feb 23 10:28:15 crc kubenswrapper[4904]: I0223 10:28:15.866814 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 10:28:16 crc kubenswrapper[4904]: I0223 10:28:16.038183 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 10:28:16 crc kubenswrapper[4904]: I0223 10:28:16.108676 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" event={"ID":"47101850-5ecb-4158-8a6e-2c4541850b48","Type":"ContainerStarted","Data":"dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b"}
Feb 23 10:28:16 crc kubenswrapper[4904]: I0223 10:28:16.108818 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-log" containerID="cri-o://389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95" gracePeriod=30
Feb 23 10:28:16 crc kubenswrapper[4904]: I0223 10:28:16.108905 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-api" containerID="cri-o://e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26" gracePeriod=30
Feb 23 10:28:16 crc kubenswrapper[4904]: I0223 10:28:16.109244 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8"
Feb 23 10:28:16 crc kubenswrapper[4904]: I0223 10:28:16.135666 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" podStartSLOduration=3.135638622 podStartE2EDuration="3.135638622s" podCreationTimestamp="2026-02-23 10:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:28:16.129732254 +0000 UTC m=+1329.550105777" watchObservedRunningTime="2026-02-23 10:28:16.135638622 +0000 UTC m=+1329.556012125"
Feb 23 10:28:17 crc kubenswrapper[4904]: I0223
10:28:17.129137 4904 generic.go:334] "Generic (PLEG): container finished" podID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerID="389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95" exitCode=143 Feb 23 10:28:17 crc kubenswrapper[4904]: I0223 10:28:17.130429 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfe9e77-e9fc-4e65-b187-7be1c729739b","Type":"ContainerDied","Data":"389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95"} Feb 23 10:28:17 crc kubenswrapper[4904]: I0223 10:28:17.398242 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:28:17 crc kubenswrapper[4904]: I0223 10:28:17.398303 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:28:17 crc kubenswrapper[4904]: I0223 10:28:17.523986 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:17 crc kubenswrapper[4904]: I0223 10:28:17.524319 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="ceilometer-central-agent" containerID="cri-o://c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a" gracePeriod=30 Feb 23 10:28:17 crc kubenswrapper[4904]: I0223 10:28:17.524377 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="sg-core" containerID="cri-o://2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7" gracePeriod=30 Feb 23 10:28:17 crc kubenswrapper[4904]: I0223 10:28:17.524383 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="proxy-httpd" containerID="cri-o://a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9" gracePeriod=30 Feb 23 10:28:17 crc kubenswrapper[4904]: I0223 10:28:17.524428 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="ceilometer-notification-agent" containerID="cri-o://6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8" gracePeriod=30 Feb 23 10:28:18 crc kubenswrapper[4904]: I0223 10:28:18.142816 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerID="a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9" exitCode=0 Feb 23 10:28:18 crc kubenswrapper[4904]: I0223 10:28:18.142838 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerDied","Data":"a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9"} Feb 23 10:28:18 crc kubenswrapper[4904]: I0223 10:28:18.142921 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerDied","Data":"2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7"} Feb 23 10:28:18 crc kubenswrapper[4904]: I0223 10:28:18.142872 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerID="2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7" exitCode=2 Feb 23 10:28:18 crc kubenswrapper[4904]: I0223 10:28:18.142955 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerID="c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a" exitCode=0 Feb 23 10:28:18 crc kubenswrapper[4904]: I0223 10:28:18.142979 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerDied","Data":"c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a"} Feb 23 10:28:19 crc kubenswrapper[4904]: I0223 10:28:19.831217 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:28:19 crc kubenswrapper[4904]: I0223 10:28:19.923173 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:28:19 crc kubenswrapper[4904]: I0223 10:28:19.981311 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfe9e77-e9fc-4e65-b187-7be1c729739b-logs\") pod \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " Feb 23 10:28:19 crc kubenswrapper[4904]: I0223 10:28:19.981530 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-combined-ca-bundle\") pod \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " Feb 23 10:28:19 crc kubenswrapper[4904]: I0223 10:28:19.981586 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-config-data\") pod \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " Feb 23 10:28:19 crc kubenswrapper[4904]: I0223 10:28:19.981647 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6p8p\" (UniqueName: \"kubernetes.io/projected/5bfe9e77-e9fc-4e65-b187-7be1c729739b-kube-api-access-c6p8p\") pod \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\" (UID: \"5bfe9e77-e9fc-4e65-b187-7be1c729739b\") " Feb 23 10:28:19 crc kubenswrapper[4904]: I0223 10:28:19.982107 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bfe9e77-e9fc-4e65-b187-7be1c729739b-logs" (OuterVolumeSpecName: "logs") pod "5bfe9e77-e9fc-4e65-b187-7be1c729739b" (UID: "5bfe9e77-e9fc-4e65-b187-7be1c729739b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:28:19 crc kubenswrapper[4904]: I0223 10:28:19.989399 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfe9e77-e9fc-4e65-b187-7be1c729739b-kube-api-access-c6p8p" (OuterVolumeSpecName: "kube-api-access-c6p8p") pod "5bfe9e77-e9fc-4e65-b187-7be1c729739b" (UID: "5bfe9e77-e9fc-4e65-b187-7be1c729739b"). InnerVolumeSpecName "kube-api-access-c6p8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.016753 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-config-data" (OuterVolumeSpecName: "config-data") pod "5bfe9e77-e9fc-4e65-b187-7be1c729739b" (UID: "5bfe9e77-e9fc-4e65-b187-7be1c729739b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.023794 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bfe9e77-e9fc-4e65-b187-7be1c729739b" (UID: "5bfe9e77-e9fc-4e65-b187-7be1c729739b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.082962 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-scripts\") pod \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.083076 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-run-httpd\") pod \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.083184 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-ceilometer-tls-certs\") pod \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.083803 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2dr7\" (UniqueName: \"kubernetes.io/projected/1c300ac9-b7ea-4f33-8216-eac39c5497a4-kube-api-access-f2dr7\") pod \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.083996 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-sg-core-conf-yaml\") pod \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.084049 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-config-data\") pod \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.084128 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-log-httpd\") pod \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.084233 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-combined-ca-bundle\") pod \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\" (UID: \"1c300ac9-b7ea-4f33-8216-eac39c5497a4\") " Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.084557 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1c300ac9-b7ea-4f33-8216-eac39c5497a4" (UID: "1c300ac9-b7ea-4f33-8216-eac39c5497a4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.084679 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1c300ac9-b7ea-4f33-8216-eac39c5497a4" (UID: "1c300ac9-b7ea-4f33-8216-eac39c5497a4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.085627 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.085665 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfe9e77-e9fc-4e65-b187-7be1c729739b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.085680 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.085695 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6p8p\" (UniqueName: \"kubernetes.io/projected/5bfe9e77-e9fc-4e65-b187-7be1c729739b-kube-api-access-c6p8p\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.085729 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfe9e77-e9fc-4e65-b187-7be1c729739b-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.085743 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c300ac9-b7ea-4f33-8216-eac39c5497a4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.086941 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-scripts" (OuterVolumeSpecName: "scripts") pod "1c300ac9-b7ea-4f33-8216-eac39c5497a4" (UID: "1c300ac9-b7ea-4f33-8216-eac39c5497a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.088675 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c300ac9-b7ea-4f33-8216-eac39c5497a4-kube-api-access-f2dr7" (OuterVolumeSpecName: "kube-api-access-f2dr7") pod "1c300ac9-b7ea-4f33-8216-eac39c5497a4" (UID: "1c300ac9-b7ea-4f33-8216-eac39c5497a4"). InnerVolumeSpecName "kube-api-access-f2dr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.114236 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1c300ac9-b7ea-4f33-8216-eac39c5497a4" (UID: "1c300ac9-b7ea-4f33-8216-eac39c5497a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.161149 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1c300ac9-b7ea-4f33-8216-eac39c5497a4" (UID: "1c300ac9-b7ea-4f33-8216-eac39c5497a4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.168311 4904 generic.go:334] "Generic (PLEG): container finished" podID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerID="e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26" exitCode=0 Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.168384 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfe9e77-e9fc-4e65-b187-7be1c729739b","Type":"ContainerDied","Data":"e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26"} Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.168413 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.168504 4904 scope.go:117] "RemoveContainer" containerID="e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.168486 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfe9e77-e9fc-4e65-b187-7be1c729739b","Type":"ContainerDied","Data":"2d49eb3ed2fb7623af90c89bd569c17031f4e9780f7e013e32b036afb0f13d8a"} Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.178369 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerID="6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8" exitCode=0 Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.178421 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerDied","Data":"6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8"} Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.178455 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c300ac9-b7ea-4f33-8216-eac39c5497a4","Type":"ContainerDied","Data":"73aebdce89d087f489e2032becbad82e867e5bf6998e66f3e4094de2d6192ac7"} Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.178548 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.181117 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c300ac9-b7ea-4f33-8216-eac39c5497a4" (UID: "1c300ac9-b7ea-4f33-8216-eac39c5497a4"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.192499 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.192543 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.192562 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2dr7\" (UniqueName: \"kubernetes.io/projected/1c300ac9-b7ea-4f33-8216-eac39c5497a4-kube-api-access-f2dr7\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.192574 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.192584 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.217003 4904 scope.go:117] "RemoveContainer" containerID="389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.265166 4904 scope.go:117] "RemoveContainer" containerID="e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.272081 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26\": container with ID starting with e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26 not found: ID does not exist" containerID="e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.272137 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26"} err="failed to get container status \"e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26\": rpc error: code = NotFound desc = could not find container \"e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26\": container with ID starting with e4cc81dabc83dfd3db7cf64e62a58606055f9bf03c24981434671cafa8f4ab26 not found: ID does not exist" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.272179 4904 scope.go:117] "RemoveContainer" containerID="389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.278620 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95\": container with ID starting with 389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95 not found: ID does not exist" containerID="389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95" Feb 23 10:28:20 crc kubenswrapper[4904]: 
I0223 10:28:20.288447 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95"} err="failed to get container status \"389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95\": rpc error: code = NotFound desc = could not find container \"389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95\": container with ID starting with 389aa706248ea848f5c4a2dc05561beec015f9b73516e45c1044c44fe3ce6c95 not found: ID does not exist" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.291743 4904 scope.go:117] "RemoveContainer" containerID="a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.292369 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.293079 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-config-data" (OuterVolumeSpecName: "config-data") pod "1c300ac9-b7ea-4f33-8216-eac39c5497a4" (UID: "1c300ac9-b7ea-4f33-8216-eac39c5497a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.301244 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c300ac9-b7ea-4f33-8216-eac39c5497a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.317558 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.326345 4904 scope.go:117] "RemoveContainer" containerID="2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.334106 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.335084 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-log" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335116 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-log" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.335127 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-api" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335136 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-api" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.335158 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="ceilometer-central-agent" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335167 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="ceilometer-central-agent" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.335182 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="ceilometer-notification-agent" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335190 4904 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="ceilometer-notification-agent" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.335210 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="sg-core" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335219 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="sg-core" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.335258 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="proxy-httpd" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335266 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="proxy-httpd" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335485 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-api" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335505 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="ceilometer-notification-agent" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335519 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" containerName="nova-api-log" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335529 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="ceilometer-central-agent" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335538 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="sg-core" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.335548 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" containerName="proxy-httpd" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.337595 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.340462 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.340660 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.340833 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.362319 4904 scope.go:117] "RemoveContainer" containerID="6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.365908 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.403443 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q6s2\" (UniqueName: \"kubernetes.io/projected/d3f9b33d-487c-401e-9b53-7d41616549aa-kube-api-access-5q6s2\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.403570 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.403818 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-config-data\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.404065 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.404227 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3f9b33d-487c-401e-9b53-7d41616549aa-logs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.404275 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.471180 4904 scope.go:117] "RemoveContainer" containerID="c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.502735 4904 scope.go:117] "RemoveContainer" containerID="a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.503359 4904 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9\": container with ID starting with a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9 not found: ID does not exist" containerID="a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.503389 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9"} err="failed to get container status \"a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9\": rpc error: code = NotFound desc = could not find container \"a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9\": container with ID starting with a2583273f45249b1268860292d45966fc83514a0455615c9653dd6f82cf0efc9 not found: ID does not exist" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.503408 4904 scope.go:117] "RemoveContainer" containerID="2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.503703 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7\": container with ID starting with 2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7 not found: ID does not exist" containerID="2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.503752 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7"} err="failed to get container status \"2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7\": rpc error: code = NotFound desc = could not find container \"2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7\": container with ID starting with 2e8d558a650745226bcd9e90b18aec0070b949fdbca68085f3e8ec64d1b9e4f7 not found: ID does not exist" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.503776 4904 scope.go:117] "RemoveContainer" containerID="6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.504050 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8\": container with ID starting with 6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8 not found: ID does not exist" containerID="6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.504073 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8"} err="failed to get container status \"6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8\": rpc error: code = NotFound desc = could not find container \"6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8\": container with ID starting with 6c80192ec27fed4edb18e29a9de87ad4e3c0c0e2c7c12db4015b8b82a0668cb8 not found: ID does not exist" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.504088 4904 scope.go:117] "RemoveContainer" 
containerID="c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a" Feb 23 10:28:20 crc kubenswrapper[4904]: E0223 10:28:20.504517 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a\": container with ID starting with c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a not found: ID does not exist" containerID="c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.504538 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a"} err="failed to get container status \"c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a\": rpc error: code = NotFound desc = could not find container \"c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a\": container with ID starting with c5a7084c28063e51e8b27130572eb10f8fb6614c5f5b7c5436cf9e4a1317931a not found: ID does not exist" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.507160 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q6s2\" (UniqueName: \"kubernetes.io/projected/d3f9b33d-487c-401e-9b53-7d41616549aa-kube-api-access-5q6s2\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.507248 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.507298 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-config-data\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.507366 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.507433 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3f9b33d-487c-401e-9b53-7d41616549aa-logs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.507469 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.511185 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3f9b33d-487c-401e-9b53-7d41616549aa-logs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " 
pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.516058 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.518296 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-config-data\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.519669 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.525606 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.527164 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.534767 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.547386 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q6s2\" (UniqueName: \"kubernetes.io/projected/d3f9b33d-487c-401e-9b53-7d41616549aa-kube-api-access-5q6s2\") pod \"nova-api-0\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.571230 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.573891 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.585378 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.624973 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.625238 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.625361 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.716056 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjb8h\" (UniqueName: \"kubernetes.io/projected/b4cc4234-fb72-4d00-95a6-82a77f062057-kube-api-access-bjb8h\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.716149 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.716189 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-config-data\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.716221 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-scripts\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.716280 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.716474 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cc4234-fb72-4d00-95a6-82a77f062057-run-httpd\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.716557 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cc4234-fb72-4d00-95a6-82a77f062057-log-httpd\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.716598 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.746612 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.818190 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-config-data\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.818252 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-scripts\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.818309 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.818362 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cc4234-fb72-4d00-95a6-82a77f062057-run-httpd\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.818401 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cc4234-fb72-4d00-95a6-82a77f062057-log-httpd\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.818425 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.818471 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjb8h\" (UniqueName: \"kubernetes.io/projected/b4cc4234-fb72-4d00-95a6-82a77f062057-kube-api-access-bjb8h\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.818526 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.820330 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cc4234-fb72-4d00-95a6-82a77f062057-run-httpd\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc 
kubenswrapper[4904]: I0223 10:28:20.825949 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cc4234-fb72-4d00-95a6-82a77f062057-log-httpd\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.838246 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.840249 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-config-data\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.849115 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.849998 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.863670 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4cc4234-fb72-4d00-95a6-82a77f062057-scripts\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.867689 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.868410 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjb8h\" (UniqueName: \"kubernetes.io/projected/b4cc4234-fb72-4d00-95a6-82a77f062057-kube-api-access-bjb8h\") pod \"ceilometer-0\" (UID: \"b4cc4234-fb72-4d00-95a6-82a77f062057\") " pod="openstack/ceilometer-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.898911 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:20 crc kubenswrapper[4904]: I0223 10:28:20.959172 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.218636 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.269877 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c300ac9-b7ea-4f33-8216-eac39c5497a4" path="/var/lib/kubelet/pods/1c300ac9-b7ea-4f33-8216-eac39c5497a4/volumes" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.273405 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bfe9e77-e9fc-4e65-b187-7be1c729739b" path="/var/lib/kubelet/pods/5bfe9e77-e9fc-4e65-b187-7be1c729739b/volumes" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.400741 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.478917 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6q59t"] Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.481088 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.487367 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.487448 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.490229 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6q59t"] Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.538989 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 10:28:21 crc kubenswrapper[4904]: W0223 10:28:21.545099 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4cc4234_fb72_4d00_95a6_82a77f062057.slice/crio-ce4b2342c028d5b39d6e0304626c23b9bcb1d95feeadb3c54567a61dc31dcf97 WatchSource:0}: Error finding container ce4b2342c028d5b39d6e0304626c23b9bcb1d95feeadb3c54567a61dc31dcf97: Status 404 returned error can't find the container with id ce4b2342c028d5b39d6e0304626c23b9bcb1d95feeadb3c54567a61dc31dcf97 Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.557749 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.558010 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-config-data\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.558113 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kr6h\" (UniqueName: \"kubernetes.io/projected/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-kube-api-access-5kr6h\") pod 
\"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.558205 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-scripts\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.659911 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-scripts\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.659997 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.660095 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-config-data\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.660137 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kr6h\" (UniqueName: \"kubernetes.io/projected/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-kube-api-access-5kr6h\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.665307 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.666072 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-scripts\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.667015 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-config-data\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.676880 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kr6h\" (UniqueName: \"kubernetes.io/projected/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-kube-api-access-5kr6h\") pod \"nova-cell1-cell-mapping-6q59t\" (UID: 
\"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:21 crc kubenswrapper[4904]: I0223 10:28:21.756445 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:22 crc kubenswrapper[4904]: I0223 10:28:22.202758 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3f9b33d-487c-401e-9b53-7d41616549aa","Type":"ContainerStarted","Data":"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e"} Feb 23 10:28:22 crc kubenswrapper[4904]: I0223 10:28:22.203285 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3f9b33d-487c-401e-9b53-7d41616549aa","Type":"ContainerStarted","Data":"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0"} Feb 23 10:28:22 crc kubenswrapper[4904]: I0223 10:28:22.203299 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3f9b33d-487c-401e-9b53-7d41616549aa","Type":"ContainerStarted","Data":"fcf098229a2e9f3d0db04d97b5d5734f651f6f55e82ce084490e64ec38e78682"} Feb 23 10:28:22 crc kubenswrapper[4904]: I0223 10:28:22.206192 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cc4234-fb72-4d00-95a6-82a77f062057","Type":"ContainerStarted","Data":"758cec91d2b8c56e24222d713ee49cb7c9f402fd66247cbaf4a77ab6c55e9182"} Feb 23 10:28:22 crc kubenswrapper[4904]: I0223 10:28:22.206252 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cc4234-fb72-4d00-95a6-82a77f062057","Type":"ContainerStarted","Data":"ce4b2342c028d5b39d6e0304626c23b9bcb1d95feeadb3c54567a61dc31dcf97"} Feb 23 10:28:22 crc kubenswrapper[4904]: I0223 10:28:22.237901 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.237886901 podStartE2EDuration="2.237886901s" podCreationTimestamp="2026-02-23 10:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:28:22.235264936 +0000 UTC m=+1335.655638449" watchObservedRunningTime="2026-02-23 10:28:22.237886901 +0000 UTC m=+1335.658260404" Feb 23 10:28:22 crc kubenswrapper[4904]: I0223 10:28:22.306935 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6q59t"] Feb 23 10:28:23 crc kubenswrapper[4904]: I0223 10:28:23.229318 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6q59t" event={"ID":"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58","Type":"ContainerStarted","Data":"bccdcb450f4f692b42e8199cdc9e2bc972283ec25a69293ce48f39063a515c86"} Feb 23 10:28:23 crc kubenswrapper[4904]: I0223 10:28:23.230202 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6q59t" event={"ID":"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58","Type":"ContainerStarted","Data":"15277c12352e149412b27797c17cd4bfd62c9dd33da23eec829a1fd93937eed0"} Feb 23 10:28:23 crc kubenswrapper[4904]: I0223 10:28:23.238862 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cc4234-fb72-4d00-95a6-82a77f062057","Type":"ContainerStarted","Data":"13901973222aa776d4507a9f607ac9e14fdeae65ad54e3f7756481cc115e16d6"} Feb 23 10:28:23 crc kubenswrapper[4904]: I0223 10:28:23.254868 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-cell-mapping-6q59t" podStartSLOduration=2.254848657 podStartE2EDuration="2.254848657s" podCreationTimestamp="2026-02-23 10:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:28:23.2496967 +0000 UTC m=+1336.670070253" watchObservedRunningTime="2026-02-23 10:28:23.254848657 +0000 UTC m=+1336.675222170" Feb 23 10:28:23 crc kubenswrapper[4904]: I0223 10:28:23.701563 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" Feb 23 10:28:23 crc kubenswrapper[4904]: I0223 10:28:23.795543 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fvhqv"] Feb 23 10:28:23 crc kubenswrapper[4904]: I0223 10:28:23.798983 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" podUID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerName="dnsmasq-dns" containerID="cri-o://494425e1f3cd9dc313d0142e1f031250fae2626080d819cf5c6589ad36a93342" gracePeriod=10 Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.058909 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" podUID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.211:5353: connect: connection refused" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.259582 4904 generic.go:334] "Generic (PLEG): container finished" podID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerID="494425e1f3cd9dc313d0142e1f031250fae2626080d819cf5c6589ad36a93342" exitCode=0 Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.260013 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" event={"ID":"f9a78324-b9aa-4d60-86f6-abf52adf1de4","Type":"ContainerDied","Data":"494425e1f3cd9dc313d0142e1f031250fae2626080d819cf5c6589ad36a93342"} Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.265041 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cc4234-fb72-4d00-95a6-82a77f062057","Type":"ContainerStarted","Data":"071f032ac0187fce711d637c6b14ccb6500236b09e8d6b195c02171f9a5627f5"} Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.458378 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.539027 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-nb\") pod \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.539093 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-sb\") pod \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.539363 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hltqk\" (UniqueName: \"kubernetes.io/projected/f9a78324-b9aa-4d60-86f6-abf52adf1de4-kube-api-access-hltqk\") pod \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.539434 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-svc\") pod \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.539488 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-config\") pod \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.539629 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-swift-storage-0\") pod \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\" (UID: \"f9a78324-b9aa-4d60-86f6-abf52adf1de4\") " Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.553125 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a78324-b9aa-4d60-86f6-abf52adf1de4-kube-api-access-hltqk" (OuterVolumeSpecName: "kube-api-access-hltqk") pod "f9a78324-b9aa-4d60-86f6-abf52adf1de4" (UID: "f9a78324-b9aa-4d60-86f6-abf52adf1de4"). InnerVolumeSpecName "kube-api-access-hltqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.620118 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9a78324-b9aa-4d60-86f6-abf52adf1de4" (UID: "f9a78324-b9aa-4d60-86f6-abf52adf1de4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.624974 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9a78324-b9aa-4d60-86f6-abf52adf1de4" (UID: "f9a78324-b9aa-4d60-86f6-abf52adf1de4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.634673 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9a78324-b9aa-4d60-86f6-abf52adf1de4" (UID: "f9a78324-b9aa-4d60-86f6-abf52adf1de4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.643647 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hltqk\" (UniqueName: \"kubernetes.io/projected/f9a78324-b9aa-4d60-86f6-abf52adf1de4-kube-api-access-hltqk\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.643672 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.643682 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.643690 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.646659 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f9a78324-b9aa-4d60-86f6-abf52adf1de4" (UID: "f9a78324-b9aa-4d60-86f6-abf52adf1de4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.654554 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-config" (OuterVolumeSpecName: "config") pod "f9a78324-b9aa-4d60-86f6-abf52adf1de4" (UID: "f9a78324-b9aa-4d60-86f6-abf52adf1de4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.745944 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:24 crc kubenswrapper[4904]: I0223 10:28:24.746289 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9a78324-b9aa-4d60-86f6-abf52adf1de4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:25 crc kubenswrapper[4904]: I0223 10:28:25.294845 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" event={"ID":"f9a78324-b9aa-4d60-86f6-abf52adf1de4","Type":"ContainerDied","Data":"159ee20b0e3a3ecc9ca6f59a8bc66a3900bd1360e94ceaaae6b75fa8edfdda58"} Feb 23 10:28:25 crc kubenswrapper[4904]: I0223 10:28:25.294941 4904 scope.go:117] "RemoveContainer" containerID="494425e1f3cd9dc313d0142e1f031250fae2626080d819cf5c6589ad36a93342" Feb 23 10:28:25 crc kubenswrapper[4904]: I0223 10:28:25.295137 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-fvhqv" Feb 23 10:28:25 crc kubenswrapper[4904]: I0223 10:28:25.331464 4904 scope.go:117] "RemoveContainer" containerID="7a750c2c87b1b8286eb45732f2675051130642e3f80db7964f2d0acf05e12d11" Feb 23 10:28:25 crc kubenswrapper[4904]: I0223 10:28:25.361255 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fvhqv"] Feb 23 10:28:25 crc kubenswrapper[4904]: I0223 10:28:25.370879 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fvhqv"] Feb 23 10:28:26 crc kubenswrapper[4904]: I0223 10:28:26.311859 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cc4234-fb72-4d00-95a6-82a77f062057","Type":"ContainerStarted","Data":"b5b599aed576174db95731bbffd3832d1c970c4c6fea1e6f97a52b23ea3fcd87"} Feb 23 10:28:26 crc kubenswrapper[4904]: I0223 10:28:26.312402 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 10:28:26 crc kubenswrapper[4904]: I0223 10:28:26.344785 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.781211287 podStartE2EDuration="6.344760776s" podCreationTimestamp="2026-02-23 10:28:20 +0000 UTC" firstStartedPulling="2026-02-23 10:28:21.549055732 +0000 UTC m=+1334.969429245" lastFinishedPulling="2026-02-23 10:28:25.112605211 +0000 UTC m=+1338.532978734" observedRunningTime="2026-02-23 10:28:26.332777785 +0000 UTC m=+1339.753151308" watchObservedRunningTime="2026-02-23 10:28:26.344760776 +0000 UTC m=+1339.765134299" Feb 23 10:28:27 crc kubenswrapper[4904]: I0223 10:28:27.273067 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" path="/var/lib/kubelet/pods/f9a78324-b9aa-4d60-86f6-abf52adf1de4/volumes" Feb 23 10:28:28 crc kubenswrapper[4904]: I0223 10:28:28.349539 4904 generic.go:334] "Generic (PLEG): container finished" podID="3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" containerID="bccdcb450f4f692b42e8199cdc9e2bc972283ec25a69293ce48f39063a515c86" exitCode=0 Feb 23 10:28:28 crc kubenswrapper[4904]: I0223 10:28:28.349779 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6q59t" 
event={"ID":"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58","Type":"ContainerDied","Data":"bccdcb450f4f692b42e8199cdc9e2bc972283ec25a69293ce48f39063a515c86"} Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.840633 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.885245 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-combined-ca-bundle\") pod \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.885399 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-scripts\") pod \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.885559 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-config-data\") pod \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.885759 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kr6h\" (UniqueName: \"kubernetes.io/projected/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-kube-api-access-5kr6h\") pod \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\" (UID: \"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58\") " Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.894134 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-kube-api-access-5kr6h" (OuterVolumeSpecName: "kube-api-access-5kr6h") pod "3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" (UID: "3de5ba4e-08ff-4c74-b42c-037d9f6b8d58"). InnerVolumeSpecName "kube-api-access-5kr6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.897544 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-scripts" (OuterVolumeSpecName: "scripts") pod "3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" (UID: "3de5ba4e-08ff-4c74-b42c-037d9f6b8d58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.945859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" (UID: "3de5ba4e-08ff-4c74-b42c-037d9f6b8d58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.950770 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-config-data" (OuterVolumeSpecName: "config-data") pod "3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" (UID: "3de5ba4e-08ff-4c74-b42c-037d9f6b8d58"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.994885 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kr6h\" (UniqueName: \"kubernetes.io/projected/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-kube-api-access-5kr6h\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.994932 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.994945 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:29 crc kubenswrapper[4904]: I0223 10:28:29.994956 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.408541 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6q59t" event={"ID":"3de5ba4e-08ff-4c74-b42c-037d9f6b8d58","Type":"ContainerDied","Data":"15277c12352e149412b27797c17cd4bfd62c9dd33da23eec829a1fd93937eed0"} Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.408592 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15277c12352e149412b27797c17cd4bfd62c9dd33da23eec829a1fd93937eed0" Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.409375 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6q59t" Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.581930 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.582150 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="139994b7-e887-4940-b942-08bdf6b39dc5" containerName="nova-scheduler-scheduler" containerID="cri-o://ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc" gracePeriod=30 Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.594770 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.595031 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerName="nova-api-log" containerID="cri-o://013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0" gracePeriod=30 Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.595173 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerName="nova-api-api" containerID="cri-o://57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e" gracePeriod=30 Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.636533 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.637096 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="598def1d-4972-40aa-a33b-30481ae4a527" 
containerName="nova-metadata-log" containerID="cri-o://5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045" gracePeriod=30 Feb 23 10:28:30 crc kubenswrapper[4904]: I0223 10:28:30.637228 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-metadata" containerID="cri-o://408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db" gracePeriod=30 Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.354736 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.420410 4904 generic.go:334] "Generic (PLEG): container finished" podID="598def1d-4972-40aa-a33b-30481ae4a527" containerID="5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045" exitCode=143 Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.420484 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"598def1d-4972-40aa-a33b-30481ae4a527","Type":"ContainerDied","Data":"5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045"} Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.424886 4904 generic.go:334] "Generic (PLEG): container finished" podID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerID="57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e" exitCode=0 Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.425009 4904 generic.go:334] "Generic (PLEG): container finished" podID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerID="013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0" exitCode=143 Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.425079 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3f9b33d-487c-401e-9b53-7d41616549aa","Type":"ContainerDied","Data":"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e"} Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.425159 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3f9b33d-487c-401e-9b53-7d41616549aa","Type":"ContainerDied","Data":"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0"} Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.425215 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d3f9b33d-487c-401e-9b53-7d41616549aa","Type":"ContainerDied","Data":"fcf098229a2e9f3d0db04d97b5d5734f651f6f55e82ce084490e64ec38e78682"} Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.425252 4904 scope.go:117] "RemoveContainer" containerID="57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.425449 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.433119 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q6s2\" (UniqueName: \"kubernetes.io/projected/d3f9b33d-487c-401e-9b53-7d41616549aa-kube-api-access-5q6s2\") pod \"d3f9b33d-487c-401e-9b53-7d41616549aa\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.433233 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3f9b33d-487c-401e-9b53-7d41616549aa-logs\") pod \"d3f9b33d-487c-401e-9b53-7d41616549aa\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.433446 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-config-data\") pod \"d3f9b33d-487c-401e-9b53-7d41616549aa\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.433589 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-internal-tls-certs\") pod \"d3f9b33d-487c-401e-9b53-7d41616549aa\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.433628 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-public-tls-certs\") pod \"d3f9b33d-487c-401e-9b53-7d41616549aa\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.433698 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-combined-ca-bundle\") pod \"d3f9b33d-487c-401e-9b53-7d41616549aa\" (UID: \"d3f9b33d-487c-401e-9b53-7d41616549aa\") " Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.435174 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f9b33d-487c-401e-9b53-7d41616549aa-logs" (OuterVolumeSpecName: "logs") pod "d3f9b33d-487c-401e-9b53-7d41616549aa" (UID: "d3f9b33d-487c-401e-9b53-7d41616549aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.440988 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f9b33d-487c-401e-9b53-7d41616549aa-kube-api-access-5q6s2" (OuterVolumeSpecName: "kube-api-access-5q6s2") pod "d3f9b33d-487c-401e-9b53-7d41616549aa" (UID: "d3f9b33d-487c-401e-9b53-7d41616549aa"). InnerVolumeSpecName "kube-api-access-5q6s2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.461090 4904 scope.go:117] "RemoveContainer" containerID="013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.468627 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-config-data" (OuterVolumeSpecName: "config-data") pod "d3f9b33d-487c-401e-9b53-7d41616549aa" (UID: "d3f9b33d-487c-401e-9b53-7d41616549aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.476179 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3f9b33d-487c-401e-9b53-7d41616549aa" (UID: "d3f9b33d-487c-401e-9b53-7d41616549aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.482841 4904 scope.go:117] "RemoveContainer" containerID="57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e" Feb 23 10:28:31 crc kubenswrapper[4904]: E0223 10:28:31.483347 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e\": container with ID starting with 57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e not found: ID does not exist" containerID="57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.483378 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e"} err="failed to get container status \"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e\": rpc error: code = NotFound desc = could not find container \"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e\": container with ID starting with 57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e not found: ID does not exist" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.483401 4904 scope.go:117] "RemoveContainer" containerID="013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0" Feb 23 10:28:31 crc kubenswrapper[4904]: E0223 10:28:31.483807 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0\": container with ID starting with 013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0 not found: ID does not exist" containerID="013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.483897 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0"} err="failed to get container status \"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0\": rpc error: code = NotFound desc = could not find container \"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0\": container with ID starting with 013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0 
not found: ID does not exist" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.483971 4904 scope.go:117] "RemoveContainer" containerID="57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.484369 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e"} err="failed to get container status \"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e\": rpc error: code = NotFound desc = could not find container \"57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e\": container with ID starting with 57dffd3589f114530eb420260b3ea7e878a6cfe13fd0d21299e93e5d82716c6e not found: ID does not exist" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.484394 4904 scope.go:117] "RemoveContainer" containerID="013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.484672 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0"} err="failed to get container status \"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0\": rpc error: code = NotFound desc = could not find container \"013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0\": container with ID starting with 013500ee18a2f4e5496258166864532434ad4d7df886c8dd1a9e954130d531a0 not found: ID does not exist" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.492010 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d3f9b33d-487c-401e-9b53-7d41616549aa" (UID: "d3f9b33d-487c-401e-9b53-7d41616549aa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.492077 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d3f9b33d-487c-401e-9b53-7d41616549aa" (UID: "d3f9b33d-487c-401e-9b53-7d41616549aa"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.536371 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.536409 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.536421 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.536431 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q6s2\" (UniqueName: \"kubernetes.io/projected/d3f9b33d-487c-401e-9b53-7d41616549aa-kube-api-access-5q6s2\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.536443 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3f9b33d-487c-401e-9b53-7d41616549aa-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.536458 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f9b33d-487c-401e-9b53-7d41616549aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.835256 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.846918 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.879288 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:31 crc kubenswrapper[4904]: E0223 10:28:31.879834 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerName="nova-api-log" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.879853 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerName="nova-api-log" Feb 23 10:28:31 crc kubenswrapper[4904]: E0223 10:28:31.879869 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerName="init" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.879876 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerName="init" Feb 23 10:28:31 crc kubenswrapper[4904]: E0223 10:28:31.879897 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerName="nova-api-api" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.879904 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerName="nova-api-api" Feb 23 10:28:31 crc kubenswrapper[4904]: E0223 10:28:31.879910 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerName="dnsmasq-dns" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.879916 4904 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerName="dnsmasq-dns" Feb 23 10:28:31 crc kubenswrapper[4904]: E0223 10:28:31.879934 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" containerName="nova-manage" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.879940 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" containerName="nova-manage" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.880163 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a78324-b9aa-4d60-86f6-abf52adf1de4" containerName="dnsmasq-dns" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.880186 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerName="nova-api-api" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.880195 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" containerName="nova-api-log" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.880204 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" containerName="nova-manage" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.881379 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.884682 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.884887 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.884913 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.895096 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.945553 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.945928 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czgr4\" (UniqueName: \"kubernetes.io/projected/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-kube-api-access-czgr4\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.946746 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.946868 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.946947 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-config-data\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:31 crc kubenswrapper[4904]: I0223 10:28:31.947197 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-logs\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.074800 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-logs\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.075025 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.075078 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czgr4\" (UniqueName: \"kubernetes.io/projected/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-kube-api-access-czgr4\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.075169 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.075260 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-public-tls-certs\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.075327 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-config-data\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.075989 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-logs\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.090621 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " 
pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.091051 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-config-data\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.102475 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.113510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-public-tls-certs\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: E0223 10:28:32.123261 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.132900 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czgr4\" (UniqueName: \"kubernetes.io/projected/5f2eaf23-ec01-4da4-ab9b-ce90633dff13-kube-api-access-czgr4\") pod \"nova-api-0\" (UID: \"5f2eaf23-ec01-4da4-ab9b-ce90633dff13\") " pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: E0223 10:28:32.144987 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 10:28:32 crc kubenswrapper[4904]: E0223 10:28:32.176842 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 10:28:32 crc kubenswrapper[4904]: E0223 10:28:32.176926 4904 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="139994b7-e887-4940-b942-08bdf6b39dc5" containerName="nova-scheduler-scheduler" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.223382 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 10:28:32 crc kubenswrapper[4904]: I0223 10:28:32.738563 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 10:28:32 crc kubenswrapper[4904]: W0223 10:28:32.743987 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f2eaf23_ec01_4da4_ab9b_ce90633dff13.slice/crio-c2984d6a3cb0df713eed2cb4122ab0efeeda3ecd37a2bdcafa83e9924cf95cd8 WatchSource:0}: Error finding container c2984d6a3cb0df713eed2cb4122ab0efeeda3ecd37a2bdcafa83e9924cf95cd8: Status 404 returned error can't find the container with id c2984d6a3cb0df713eed2cb4122ab0efeeda3ecd37a2bdcafa83e9924cf95cd8 Feb 23 10:28:33 crc kubenswrapper[4904]: I0223 10:28:33.269734 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f9b33d-487c-401e-9b53-7d41616549aa" path="/var/lib/kubelet/pods/d3f9b33d-487c-401e-9b53-7d41616549aa/volumes" Feb 23 10:28:33 crc kubenswrapper[4904]: I0223 10:28:33.450898 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2eaf23-ec01-4da4-ab9b-ce90633dff13","Type":"ContainerStarted","Data":"8df257239aa068b4bf20d78af978d45ecabe752f3cbc79c12c5b5fa2ba23ee8d"} Feb 23 10:28:33 crc kubenswrapper[4904]: I0223 10:28:33.450955 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2eaf23-ec01-4da4-ab9b-ce90633dff13","Type":"ContainerStarted","Data":"7a339007f7bd84addc127f304362fb07777bb1b3cdc8d9a1b982ccf69ec5208b"} Feb 23 10:28:33 crc kubenswrapper[4904]: I0223 10:28:33.450969 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f2eaf23-ec01-4da4-ab9b-ce90633dff13","Type":"ContainerStarted","Data":"c2984d6a3cb0df713eed2cb4122ab0efeeda3ecd37a2bdcafa83e9924cf95cd8"} Feb 23 10:28:33 crc kubenswrapper[4904]: I0223 10:28:33.480619 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.480592195 podStartE2EDuration="2.480592195s" podCreationTimestamp="2026-02-23 10:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:28:33.478222537 +0000 UTC m=+1346.898596070" watchObservedRunningTime="2026-02-23 10:28:33.480592195 +0000 UTC m=+1346.900965708" Feb 23 10:28:33 crc kubenswrapper[4904]: I0223 10:28:33.784832 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": read tcp 10.217.0.2:50574->10.217.0.215:8775: read: connection reset by peer" Feb 23 10:28:33 crc kubenswrapper[4904]: I0223 10:28:33.784921 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": read tcp 10.217.0.2:50576->10.217.0.215:8775: read: connection reset by peer" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.279200 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.334161 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-nova-metadata-tls-certs\") pod \"598def1d-4972-40aa-a33b-30481ae4a527\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.334232 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-596tr\" (UniqueName: \"kubernetes.io/projected/598def1d-4972-40aa-a33b-30481ae4a527-kube-api-access-596tr\") pod \"598def1d-4972-40aa-a33b-30481ae4a527\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.334284 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-combined-ca-bundle\") pod \"598def1d-4972-40aa-a33b-30481ae4a527\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.334375 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-config-data\") pod \"598def1d-4972-40aa-a33b-30481ae4a527\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.334457 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598def1d-4972-40aa-a33b-30481ae4a527-logs\") pod \"598def1d-4972-40aa-a33b-30481ae4a527\" (UID: \"598def1d-4972-40aa-a33b-30481ae4a527\") " Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.335102 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598def1d-4972-40aa-a33b-30481ae4a527-logs" (OuterVolumeSpecName: "logs") pod "598def1d-4972-40aa-a33b-30481ae4a527" (UID: "598def1d-4972-40aa-a33b-30481ae4a527"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.344029 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598def1d-4972-40aa-a33b-30481ae4a527-kube-api-access-596tr" (OuterVolumeSpecName: "kube-api-access-596tr") pod "598def1d-4972-40aa-a33b-30481ae4a527" (UID: "598def1d-4972-40aa-a33b-30481ae4a527"). InnerVolumeSpecName "kube-api-access-596tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.381545 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "598def1d-4972-40aa-a33b-30481ae4a527" (UID: "598def1d-4972-40aa-a33b-30481ae4a527"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.382176 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-config-data" (OuterVolumeSpecName: "config-data") pod "598def1d-4972-40aa-a33b-30481ae4a527" (UID: "598def1d-4972-40aa-a33b-30481ae4a527"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.436550 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598def1d-4972-40aa-a33b-30481ae4a527-logs\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.436579 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-596tr\" (UniqueName: \"kubernetes.io/projected/598def1d-4972-40aa-a33b-30481ae4a527-kube-api-access-596tr\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.436590 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.436600 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.447521 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "598def1d-4972-40aa-a33b-30481ae4a527" (UID: "598def1d-4972-40aa-a33b-30481ae4a527"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.473794 4904 generic.go:334] "Generic (PLEG): container finished" podID="598def1d-4972-40aa-a33b-30481ae4a527" containerID="408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db" exitCode=0 Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.474846 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.476653 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"598def1d-4972-40aa-a33b-30481ae4a527","Type":"ContainerDied","Data":"408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db"} Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.476736 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"598def1d-4972-40aa-a33b-30481ae4a527","Type":"ContainerDied","Data":"33b258a8a3fb41ff06b32a791adc53b2917be3293f41d2a36eba863bb8556170"} Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.476761 4904 scope.go:117] "RemoveContainer" containerID="408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.513703 4904 scope.go:117] "RemoveContainer" containerID="5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.530061 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.540193 4904 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/598def1d-4972-40aa-a33b-30481ae4a527-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.544987 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.548337 4904 scope.go:117] "RemoveContainer" containerID="408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db" Feb 23 10:28:34 crc kubenswrapper[4904]: E0223 10:28:34.549051 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db\": container with ID starting with 408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db not found: ID does not exist" containerID="408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.549093 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db"} err="failed to get container status \"408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db\": rpc error: code = NotFound desc = could not find container \"408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db\": container with ID starting with 408911eaec98dd04d2640d5104f00bcd3ec44331591bca1e58224a56258507db not found: ID does not exist" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.549121 4904 scope.go:117] "RemoveContainer" containerID="5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045" Feb 23 10:28:34 crc kubenswrapper[4904]: E0223 10:28:34.549440 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045\": container with ID starting with 5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045 not found: ID does not exist" containerID="5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.549466 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045"} err="failed to get container status \"5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045\": rpc error: code = NotFound desc = could not find container \"5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045\": container with ID starting with 5b3c1562a8c0be628d687fcd6af440e80b9641de85af31519f48864b8df3f045 not found: ID does not exist" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.568869 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:28:34 crc kubenswrapper[4904]: E0223 10:28:34.569456 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-log" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.569480 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-log" Feb 23 10:28:34 crc kubenswrapper[4904]: E0223 10:28:34.569505 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-metadata" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.569513 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-metadata" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.569744 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-metadata" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.569764 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="598def1d-4972-40aa-a33b-30481ae4a527" containerName="nova-metadata-log" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.570958 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.574407 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.574756 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.594359 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.642115 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6s7\" (UniqueName: \"kubernetes.io/projected/018a67a3-d954-4259-9c05-298dad7d5e9d-kube-api-access-gr6s7\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.642236 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-config-data\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.642260 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.642390 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.642479 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018a67a3-d954-4259-9c05-298dad7d5e9d-logs\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.744933 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.745003 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018a67a3-d954-4259-9c05-298dad7d5e9d-logs\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.745167 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6s7\" (UniqueName: \"kubernetes.io/projected/018a67a3-d954-4259-9c05-298dad7d5e9d-kube-api-access-gr6s7\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " 
pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.745308 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-config-data\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.745373 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.745576 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018a67a3-d954-4259-9c05-298dad7d5e9d-logs\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.749038 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-config-data\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.749048 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.749355 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/018a67a3-d954-4259-9c05-298dad7d5e9d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.762055 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6s7\" (UniqueName: \"kubernetes.io/projected/018a67a3-d954-4259-9c05-298dad7d5e9d-kube-api-access-gr6s7\") pod \"nova-metadata-0\" (UID: \"018a67a3-d954-4259-9c05-298dad7d5e9d\") " pod="openstack/nova-metadata-0" Feb 23 10:28:34 crc kubenswrapper[4904]: I0223 10:28:34.913539 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 10:28:35 crc kubenswrapper[4904]: I0223 10:28:35.276687 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598def1d-4972-40aa-a33b-30481ae4a527" path="/var/lib/kubelet/pods/598def1d-4972-40aa-a33b-30481ae4a527/volumes" Feb 23 10:28:35 crc kubenswrapper[4904]: I0223 10:28:35.434067 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 10:28:35 crc kubenswrapper[4904]: W0223 10:28:35.460364 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod018a67a3_d954_4259_9c05_298dad7d5e9d.slice/crio-dfa00629a9a2c4c51b21957f73d0258df881d48ba2d70a7cf13299625af4b3f6 WatchSource:0}: Error finding container dfa00629a9a2c4c51b21957f73d0258df881d48ba2d70a7cf13299625af4b3f6: Status 404 returned error can't find the container with id dfa00629a9a2c4c51b21957f73d0258df881d48ba2d70a7cf13299625af4b3f6 Feb 23 10:28:35 crc kubenswrapper[4904]: I0223 10:28:35.529609 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018a67a3-d954-4259-9c05-298dad7d5e9d","Type":"ContainerStarted","Data":"dfa00629a9a2c4c51b21957f73d0258df881d48ba2d70a7cf13299625af4b3f6"} Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.484651 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.542909 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018a67a3-d954-4259-9c05-298dad7d5e9d","Type":"ContainerStarted","Data":"3ecb04413d9808aaec067cf85037ebb4ac7dc8328be6437d81950010657aaf15"} Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.543075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018a67a3-d954-4259-9c05-298dad7d5e9d","Type":"ContainerStarted","Data":"a6e44428bd12b91fd7f97df3555be847f19754988cb114e9f111746b452e081f"} Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.548372 4904 generic.go:334] "Generic (PLEG): container finished" podID="139994b7-e887-4940-b942-08bdf6b39dc5" containerID="ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc" exitCode=0 Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.548436 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"139994b7-e887-4940-b942-08bdf6b39dc5","Type":"ContainerDied","Data":"ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc"} Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.548474 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"139994b7-e887-4940-b942-08bdf6b39dc5","Type":"ContainerDied","Data":"1bd337df6cdbc922bda7cb6164437efd2fa9bad9c07bda7f1ea50d0107e0ba95"} Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.548499 4904 scope.go:117] "RemoveContainer" containerID="ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.548540 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.582307 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.58228702 podStartE2EDuration="2.58228702s" podCreationTimestamp="2026-02-23 10:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:28:36.573237602 +0000 UTC m=+1349.993611135" watchObservedRunningTime="2026-02-23 10:28:36.58228702 +0000 UTC m=+1350.002660533" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.583979 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-config-data\") pod \"139994b7-e887-4940-b942-08bdf6b39dc5\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.584179 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-combined-ca-bundle\") pod \"139994b7-e887-4940-b942-08bdf6b39dc5\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.584362 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v5rq\" (UniqueName: \"kubernetes.io/projected/139994b7-e887-4940-b942-08bdf6b39dc5-kube-api-access-4v5rq\") pod \"139994b7-e887-4940-b942-08bdf6b39dc5\" (UID: \"139994b7-e887-4940-b942-08bdf6b39dc5\") " Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.589725 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139994b7-e887-4940-b942-08bdf6b39dc5-kube-api-access-4v5rq" (OuterVolumeSpecName: "kube-api-access-4v5rq") pod "139994b7-e887-4940-b942-08bdf6b39dc5" (UID: "139994b7-e887-4940-b942-08bdf6b39dc5"). InnerVolumeSpecName "kube-api-access-4v5rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.591264 4904 scope.go:117] "RemoveContainer" containerID="ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc" Feb 23 10:28:36 crc kubenswrapper[4904]: E0223 10:28:36.591774 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc\": container with ID starting with ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc not found: ID does not exist" containerID="ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.591842 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc"} err="failed to get container status \"ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc\": rpc error: code = NotFound desc = could not find container \"ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc\": container with ID starting with ab4735d63ad6cfd47f94069128d7d051f43bd7b12757f1546d3ccaa70381b5bc not found: ID does not exist" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.614917 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "139994b7-e887-4940-b942-08bdf6b39dc5" (UID: "139994b7-e887-4940-b942-08bdf6b39dc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.624963 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-config-data" (OuterVolumeSpecName: "config-data") pod "139994b7-e887-4940-b942-08bdf6b39dc5" (UID: "139994b7-e887-4940-b942-08bdf6b39dc5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.688501 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.688539 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139994b7-e887-4940-b942-08bdf6b39dc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.688552 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v5rq\" (UniqueName: \"kubernetes.io/projected/139994b7-e887-4940-b942-08bdf6b39dc5-kube-api-access-4v5rq\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.951854 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.962646 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.975367 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:28:36 crc kubenswrapper[4904]: E0223 10:28:36.976255 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139994b7-e887-4940-b942-08bdf6b39dc5" containerName="nova-scheduler-scheduler" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.976287 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="139994b7-e887-4940-b942-08bdf6b39dc5" containerName="nova-scheduler-scheduler" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.976629 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="139994b7-e887-4940-b942-08bdf6b39dc5" containerName="nova-scheduler-scheduler" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.977829 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.981609 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 10:28:36 crc kubenswrapper[4904]: I0223 10:28:36.986825 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.096152 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b9599c4-3c50-48f8-b978-0628ef4f799c-config-data\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.096200 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsjzk\" (UniqueName: \"kubernetes.io/projected/1b9599c4-3c50-48f8-b978-0628ef4f799c-kube-api-access-fsjzk\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.096357 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9599c4-3c50-48f8-b978-0628ef4f799c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.198189 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9599c4-3c50-48f8-b978-0628ef4f799c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.198374 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b9599c4-3c50-48f8-b978-0628ef4f799c-config-data\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.198419 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsjzk\" (UniqueName: \"kubernetes.io/projected/1b9599c4-3c50-48f8-b978-0628ef4f799c-kube-api-access-fsjzk\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.207394 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b9599c4-3c50-48f8-b978-0628ef4f799c-config-data\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.207841 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9599c4-3c50-48f8-b978-0628ef4f799c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.221066 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsjzk\" (UniqueName: 
\"kubernetes.io/projected/1b9599c4-3c50-48f8-b978-0628ef4f799c-kube-api-access-fsjzk\") pod \"nova-scheduler-0\" (UID: \"1b9599c4-3c50-48f8-b978-0628ef4f799c\") " pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.277447 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139994b7-e887-4940-b942-08bdf6b39dc5" path="/var/lib/kubelet/pods/139994b7-e887-4940-b942-08bdf6b39dc5/volumes" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.300416 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 10:28:37 crc kubenswrapper[4904]: I0223 10:28:37.815958 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 10:28:38 crc kubenswrapper[4904]: I0223 10:28:38.588018 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b9599c4-3c50-48f8-b978-0628ef4f799c","Type":"ContainerStarted","Data":"9097670c2ca303bee0c0da831a42fb0debcc38ebc8178de3e3787be8174e07fd"} Feb 23 10:28:38 crc kubenswrapper[4904]: I0223 10:28:38.588585 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b9599c4-3c50-48f8-b978-0628ef4f799c","Type":"ContainerStarted","Data":"0874113f5960af12a9d1f5de0e3edfdfceac5535751584232315b94171f30922"} Feb 23 10:28:38 crc kubenswrapper[4904]: I0223 10:28:38.628510 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.628488962 podStartE2EDuration="2.628488962s" podCreationTimestamp="2026-02-23 10:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:28:38.607105643 +0000 UTC m=+1352.027479206" watchObservedRunningTime="2026-02-23 10:28:38.628488962 +0000 UTC m=+1352.048862475" Feb 23 10:28:39 crc kubenswrapper[4904]: I0223 10:28:39.914553 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 10:28:39 crc kubenswrapper[4904]: I0223 10:28:39.915040 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 10:28:42 crc kubenswrapper[4904]: I0223 10:28:42.224660 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 10:28:42 crc kubenswrapper[4904]: I0223 10:28:42.224748 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 10:28:42 crc kubenswrapper[4904]: I0223 10:28:42.301461 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 10:28:43 crc kubenswrapper[4904]: I0223 10:28:43.244938 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5f2eaf23-ec01-4da4-ab9b-ce90633dff13" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 10:28:43 crc kubenswrapper[4904]: I0223 10:28:43.244959 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5f2eaf23-ec01-4da4-ab9b-ce90633dff13" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 10:28:44 crc kubenswrapper[4904]: I0223 10:28:44.914345 4904 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 10:28:44 crc kubenswrapper[4904]: I0223 10:28:44.916254 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 10:28:45 crc kubenswrapper[4904]: I0223 10:28:45.932991 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="018a67a3-d954-4259-9c05-298dad7d5e9d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 10:28:45 crc kubenswrapper[4904]: I0223 10:28:45.933535 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="018a67a3-d954-4259-9c05-298dad7d5e9d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 10:28:47 crc kubenswrapper[4904]: I0223 10:28:47.300999 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 10:28:47 crc kubenswrapper[4904]: I0223 10:28:47.343695 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 10:28:47 crc kubenswrapper[4904]: I0223 10:28:47.397613 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:28:47 crc kubenswrapper[4904]: I0223 10:28:47.397685 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:28:47 crc kubenswrapper[4904]: I0223 10:28:47.755873 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 10:28:50 crc kubenswrapper[4904]: I0223 10:28:50.982648 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 23 10:28:52 crc kubenswrapper[4904]: I0223 10:28:52.230661 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 10:28:52 crc kubenswrapper[4904]: I0223 10:28:52.231320 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 10:28:52 crc kubenswrapper[4904]: I0223 10:28:52.231379 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 10:28:52 crc kubenswrapper[4904]: I0223 10:28:52.242325 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 10:28:52 crc kubenswrapper[4904]: I0223 10:28:52.822095 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 10:28:52 crc kubenswrapper[4904]: I0223 10:28:52.830495 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 10:28:54 crc kubenswrapper[4904]: I0223 10:28:54.920554 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-metadata-0" Feb 23 10:28:54 crc kubenswrapper[4904]: I0223 10:28:54.922099 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 10:28:54 crc kubenswrapper[4904]: I0223 10:28:54.932835 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 10:28:56 crc kubenswrapper[4904]: I0223 10:28:56.040220 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 10:29:04 crc kubenswrapper[4904]: I0223 10:29:04.067461 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 10:29:05 crc kubenswrapper[4904]: I0223 10:29:05.073408 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 10:29:08 crc kubenswrapper[4904]: I0223 10:29:08.201319 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerName="rabbitmq" containerID="cri-o://2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f" gracePeriod=604796 Feb 23 10:29:09 crc kubenswrapper[4904]: I0223 10:29:09.285655 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerName="rabbitmq" containerID="cri-o://efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2" gracePeriod=604796 Feb 23 10:29:12 crc kubenswrapper[4904]: I0223 10:29:12.976407 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 23 10:29:13 crc kubenswrapper[4904]: I0223 10:29:13.358519 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.842413 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.930959 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-config-data\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.931420 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-confd\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.931595 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-pod-info\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.931789 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-erlang-cookie-secret\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.931918 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-server-conf\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.932064 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-erlang-cookie\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.932169 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-plugins-conf\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.932277 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-plugins\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.932391 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.932496 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb7bl\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-kube-api-access-bb7bl\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: 
\"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.932777 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-tls\") pod \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\" (UID: \"670153e4-0ac6-4ae8-ab14-08a3f2537c6c\") " Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.933197 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.934565 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.934857 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.934919 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.939906 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-kube-api-access-bb7bl" (OuterVolumeSpecName: "kube-api-access-bb7bl") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "kube-api-access-bb7bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.954908 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-pod-info" (OuterVolumeSpecName: "pod-info") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.959024 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.967187 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.970036 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 10:29:14 crc kubenswrapper[4904]: I0223 10:29:14.989063 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-config-data" (OuterVolumeSpecName: "config-data") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.030079 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-server-conf" (OuterVolumeSpecName: "server-conf") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.036988 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.037027 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.037040 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb7bl\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-kube-api-access-bb7bl\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.037051 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.037060 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.037069 4904 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.037079 4904 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.037087 4904 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.037095 4904 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.078366 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.083290 4904 generic.go:334] "Generic (PLEG): container finished" podID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerID="2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f" exitCode=0 Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.083361 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"670153e4-0ac6-4ae8-ab14-08a3f2537c6c","Type":"ContainerDied","Data":"2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f"} Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.083392 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"670153e4-0ac6-4ae8-ab14-08a3f2537c6c","Type":"ContainerDied","Data":"ea83e583a3e807f9dab4b412e935d9446e21df2abf1b2d42e853ee8d8d71fef1"} Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.083412 4904 scope.go:117] "RemoveContainer" containerID="2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.083555 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.101943 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "670153e4-0ac6-4ae8-ab14-08a3f2537c6c" (UID: "670153e4-0ac6-4ae8-ab14-08a3f2537c6c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.115059 4904 scope.go:117] "RemoveContainer" containerID="157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.138883 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/670153e4-0ac6-4ae8-ab14-08a3f2537c6c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.138903 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.139142 4904 scope.go:117] "RemoveContainer" containerID="2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f" Feb 23 10:29:15 crc kubenswrapper[4904]: E0223 10:29:15.139646 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f\": container with ID starting with 2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f not found: ID does not exist" containerID="2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.139705 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f"} err="failed to get container status \"2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f\": rpc error: code = NotFound desc = could not find container \"2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f\": container with ID starting with 2331cfff666a7b7cb9c388cc12b784999c7f50d807279a12dc2a25f1d62ba33f not found: ID does not exist" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.139750 4904 scope.go:117] "RemoveContainer" containerID="157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652" Feb 23 10:29:15 crc kubenswrapper[4904]: E0223 10:29:15.140642 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652\": container with ID starting with 157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652 not found: ID does not exist" containerID="157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.140698 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652"} err="failed to get container status \"157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652\": rpc error: code = NotFound desc = could not find container \"157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652\": container with ID starting with 157af569ff1c401c88a51da75639bbf328ab074d17dda426a1cb24639506a652 not found: ID does not exist" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.485152 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.537775 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 10:29:15 crc 
kubenswrapper[4904]: I0223 10:29:15.546784 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 10:29:15 crc kubenswrapper[4904]: E0223 10:29:15.547277 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerName="setup-container" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.547295 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerName="setup-container" Feb 23 10:29:15 crc kubenswrapper[4904]: E0223 10:29:15.547316 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerName="rabbitmq" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.547322 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerName="rabbitmq" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.547531 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" containerName="rabbitmq" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.548667 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.555192 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.555249 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.555406 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.555534 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.555644 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gvtt5" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.555766 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.555868 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.569406 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654602 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654663 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654689 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654738 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654766 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654788 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w6b6\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-kube-api-access-7w6b6\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654809 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654863 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654882 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.654923 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.757142 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" 
(UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.757223 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.757266 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.757728 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.758305 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.758633 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.758907 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.758984 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w6b6\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-kube-api-access-7w6b6\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.759065 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.759128 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.759179 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.759272 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.759332 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.759481 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.760099 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.761156 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.761394 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.762647 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.767674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.767842 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.768075 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.779727 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w6b6\" (UniqueName: \"kubernetes.io/projected/8d3577f6-3d30-4a6c-9485-0429f1eb87f5-kube-api-access-7w6b6\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.824786 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"8d3577f6-3d30-4a6c-9485-0429f1eb87f5\") " pod="openstack/rabbitmq-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.906856 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:15 crc kubenswrapper[4904]: I0223 10:29:15.907659 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.066823 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e626c7f2-db46-4757-bd05-eedfba7b5fc8-pod-info\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.066953 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-config-data\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.066984 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-server-conf\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.067044 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e626c7f2-db46-4757-bd05-eedfba7b5fc8-erlang-cookie-secret\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.067106 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-erlang-cookie\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.067148 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.067207 4904 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-plugins-conf\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.067248 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-plugins\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.067279 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-confd\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.067337 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-tls\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.067365 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckll7\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-kube-api-access-ckll7\") pod \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\" (UID: \"e626c7f2-db46-4757-bd05-eedfba7b5fc8\") " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.069527 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.069571 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.070613 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.074361 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e626c7f2-db46-4757-bd05-eedfba7b5fc8-pod-info" (OuterVolumeSpecName: "pod-info") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.074809 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.075470 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-kube-api-access-ckll7" (OuterVolumeSpecName: "kube-api-access-ckll7") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "kube-api-access-ckll7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.075920 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.076324 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e626c7f2-db46-4757-bd05-eedfba7b5fc8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.094389 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-config-data" (OuterVolumeSpecName: "config-data") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.103347 4904 generic.go:334] "Generic (PLEG): container finished" podID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerID="efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2" exitCode=0 Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.103388 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e626c7f2-db46-4757-bd05-eedfba7b5fc8","Type":"ContainerDied","Data":"efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2"} Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.103414 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e626c7f2-db46-4757-bd05-eedfba7b5fc8","Type":"ContainerDied","Data":"9aee9a816991b9d4ee5f677bce2c8bfb8e90d4a4efb1b72bd7fe5b8d02f709be"} Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.103415 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.103431 4904 scope.go:117] "RemoveContainer" containerID="efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.148565 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-server-conf" (OuterVolumeSpecName: "server-conf") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.148966 4904 scope.go:117] "RemoveContainer" containerID="465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170164 4904 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-server-conf\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170192 4904 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e626c7f2-db46-4757-bd05-eedfba7b5fc8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170203 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170223 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170233 4904 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170241 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170249 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170257 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckll7\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-kube-api-access-ckll7\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170265 4904 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e626c7f2-db46-4757-bd05-eedfba7b5fc8-pod-info\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.170274 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e626c7f2-db46-4757-bd05-eedfba7b5fc8-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc 
kubenswrapper[4904]: I0223 10:29:16.200410 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.200600 4904 scope.go:117] "RemoveContainer" containerID="efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2" Feb 23 10:29:16 crc kubenswrapper[4904]: E0223 10:29:16.201134 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2\": container with ID starting with efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2 not found: ID does not exist" containerID="efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.201199 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2"} err="failed to get container status \"efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2\": rpc error: code = NotFound desc = could not find container \"efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2\": container with ID starting with efebd95eafc9c0b73a2d7e20abcfc5347c62e16393ceeb0a2f2d64cf8feccba2 not found: ID does not exist" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.201251 4904 scope.go:117] "RemoveContainer" containerID="465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954" Feb 23 10:29:16 crc kubenswrapper[4904]: E0223 10:29:16.202674 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954\": container with ID starting with 465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954 not found: ID does not exist" containerID="465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.202696 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954"} err="failed to get container status \"465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954\": rpc error: code = NotFound desc = could not find container \"465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954\": container with ID starting with 465ea43cc0d67644504723e43babe568f422077a27d1ea2c430281933c762954 not found: ID does not exist" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.269580 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e626c7f2-db46-4757-bd05-eedfba7b5fc8" (UID: "e626c7f2-db46-4757-bd05-eedfba7b5fc8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.271619 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.271646 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e626c7f2-db46-4757-bd05-eedfba7b5fc8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.433658 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.460914 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.477345 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.489972 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 10:29:16 crc kubenswrapper[4904]: E0223 10:29:16.491006 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerName="rabbitmq" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.491075 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerName="rabbitmq" Feb 23 10:29:16 crc kubenswrapper[4904]: E0223 10:29:16.491139 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerName="setup-container" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.491217 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerName="setup-container" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.491472 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" containerName="rabbitmq" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.500433 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.507975 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.508259 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.508391 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.508497 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.508626 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9vsnr" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.508759 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.508886 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.512313 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.679927 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.680095 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.680414 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f66afb49-9f1b-43ea-966f-8aaf91eea84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.680537 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.680587 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f66afb49-9f1b-43ea-966f-8aaf91eea84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.680741 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.680849 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.680920 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.681036 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.681097 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.681157 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8dnn\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-kube-api-access-q8dnn\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.782923 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783191 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f66afb49-9f1b-43ea-966f-8aaf91eea84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783231 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783253 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f66afb49-9f1b-43ea-966f-8aaf91eea84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783284 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783314 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783335 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783368 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783387 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783410 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8dnn\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-kube-api-access-q8dnn\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783480 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.783683 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.784471 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.784644 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.784685 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.785002 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f66afb49-9f1b-43ea-966f-8aaf91eea84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.785027 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.789808 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.791863 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f66afb49-9f1b-43ea-966f-8aaf91eea84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.792609 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f66afb49-9f1b-43ea-966f-8aaf91eea84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.793792 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.836682 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.842584 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q8dnn\" (UniqueName: \"kubernetes.io/projected/f66afb49-9f1b-43ea-966f-8aaf91eea84a-kube-api-access-q8dnn\") pod \"rabbitmq-cell1-server-0\" (UID: \"f66afb49-9f1b-43ea-966f-8aaf91eea84a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.879726 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4vj4n"] Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.882122 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.885060 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.917866 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4vj4n"] Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.973913 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.986918 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwck\" (UniqueName: \"kubernetes.io/projected/03ea6cb5-c882-497e-b913-600e95d95c94-kube-api-access-gbwck\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.986964 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.986991 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.987161 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.987234 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-config\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.987306 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:16 crc kubenswrapper[4904]: I0223 10:29:16.987331 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.089098 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.089165 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.089190 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-config\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.089224 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.089240 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.089371 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwck\" (UniqueName: \"kubernetes.io/projected/03ea6cb5-c882-497e-b913-600e95d95c94-kube-api-access-gbwck\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.089394 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.091883 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.092471 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.093124 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.093297 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.093650 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.095402 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-config\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.109705 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwck\" (UniqueName: \"kubernetes.io/projected/03ea6cb5-c882-497e-b913-600e95d95c94-kube-api-access-gbwck\") pod \"dnsmasq-dns-79bd4cc8c9-4vj4n\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.118933 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d3577f6-3d30-4a6c-9485-0429f1eb87f5","Type":"ContainerStarted","Data":"2ded568d3b774ed89751f170eab2497c98a2089e390148d8badaec426bceb0e5"} Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.224992 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.284158 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670153e4-0ac6-4ae8-ab14-08a3f2537c6c" path="/var/lib/kubelet/pods/670153e4-0ac6-4ae8-ab14-08a3f2537c6c/volumes" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.291362 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e626c7f2-db46-4757-bd05-eedfba7b5fc8" path="/var/lib/kubelet/pods/e626c7f2-db46-4757-bd05-eedfba7b5fc8/volumes" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.399122 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.399184 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.399239 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.400192 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"239d9f69c1c753f6e98d8261e34261e4ea3e4b4d4d57f0a5fcbe49086812f15b"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.400251 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://239d9f69c1c753f6e98d8261e34261e4ea3e4b4d4d57f0a5fcbe49086812f15b" gracePeriod=600 Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.475458 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 10:29:17 crc kubenswrapper[4904]: W0223 10:29:17.476950 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf66afb49_9f1b_43ea_966f_8aaf91eea84a.slice/crio-6c4871cc816eb901cfd2719d7bbb1baf95a863e2f52063bce948bdc24edf0088 WatchSource:0}: Error finding container 6c4871cc816eb901cfd2719d7bbb1baf95a863e2f52063bce948bdc24edf0088: Status 404 returned error can't find the container with id 6c4871cc816eb901cfd2719d7bbb1baf95a863e2f52063bce948bdc24edf0088 Feb 23 10:29:17 crc kubenswrapper[4904]: I0223 10:29:17.716541 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4vj4n"] Feb 23 10:29:18 crc kubenswrapper[4904]: I0223 10:29:18.132782 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f66afb49-9f1b-43ea-966f-8aaf91eea84a","Type":"ContainerStarted","Data":"6c4871cc816eb901cfd2719d7bbb1baf95a863e2f52063bce948bdc24edf0088"} Feb 23 10:29:18 crc 
kubenswrapper[4904]: I0223 10:29:18.134073 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" event={"ID":"03ea6cb5-c882-497e-b913-600e95d95c94","Type":"ContainerStarted","Data":"50d301bd141c089fb3f90beb0c5125f531ca7f30d6ac8d157ef88dbb0d05f0f4"} Feb 23 10:29:18 crc kubenswrapper[4904]: I0223 10:29:18.136430 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="239d9f69c1c753f6e98d8261e34261e4ea3e4b4d4d57f0a5fcbe49086812f15b" exitCode=0 Feb 23 10:29:18 crc kubenswrapper[4904]: I0223 10:29:18.136484 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"239d9f69c1c753f6e98d8261e34261e4ea3e4b4d4d57f0a5fcbe49086812f15b"} Feb 23 10:29:18 crc kubenswrapper[4904]: I0223 10:29:18.136568 4904 scope.go:117] "RemoveContainer" containerID="a6264b62be7a8acc04b5529c5a569156f7d3e2773a196aa33b2133b46c2a62f4" Feb 23 10:29:19 crc kubenswrapper[4904]: I0223 10:29:19.155180 4904 generic.go:334] "Generic (PLEG): container finished" podID="03ea6cb5-c882-497e-b913-600e95d95c94" containerID="02d57f56a4cfc9e9ac548c8ee7525fda2da032ca2f85c4f23949136388d0ad53" exitCode=0 Feb 23 10:29:19 crc kubenswrapper[4904]: I0223 10:29:19.155287 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" event={"ID":"03ea6cb5-c882-497e-b913-600e95d95c94","Type":"ContainerDied","Data":"02d57f56a4cfc9e9ac548c8ee7525fda2da032ca2f85c4f23949136388d0ad53"} Feb 23 10:29:19 crc kubenswrapper[4904]: I0223 10:29:19.161677 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112"} Feb 23 10:29:19 crc kubenswrapper[4904]: I0223 10:29:19.167213 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d3577f6-3d30-4a6c-9485-0429f1eb87f5","Type":"ContainerStarted","Data":"9446454bb2dd663d41a02f68a77a5ca4821383e93188b9eec50aad2ce0fa6404"} Feb 23 10:29:20 crc kubenswrapper[4904]: I0223 10:29:20.186294 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" event={"ID":"03ea6cb5-c882-497e-b913-600e95d95c94","Type":"ContainerStarted","Data":"1ec7d259bf388e2b5a7b0744ee3d3d6c6612588e02e081d22e35bafb9c5fa1ce"} Feb 23 10:29:20 crc kubenswrapper[4904]: I0223 10:29:20.188477 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:20 crc kubenswrapper[4904]: I0223 10:29:20.219471 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" podStartSLOduration=4.219451551 podStartE2EDuration="4.219451551s" podCreationTimestamp="2026-02-23 10:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:29:20.215284402 +0000 UTC m=+1393.635657925" watchObservedRunningTime="2026-02-23 10:29:20.219451551 +0000 UTC m=+1393.639825074" Feb 23 10:29:21 crc kubenswrapper[4904]: I0223 10:29:21.208893 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f66afb49-9f1b-43ea-966f-8aaf91eea84a","Type":"ContainerStarted","Data":"c9449273415eec7d767d34761320af12ba52204331906e8aecfcd14dfb4cdba4"} Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.227435 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.370497 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m8pl8"] Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.372772 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" podUID="47101850-5ecb-4158-8a6e-2c4541850b48" containerName="dnsmasq-dns" containerID="cri-o://dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b" gracePeriod=10 Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.538150 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-rfkgb"] Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.540003 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.563431 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-rfkgb"] Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.665475 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.665542 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.665582 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.665639 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.665691 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-config\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.665735 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.665782 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbmx\" (UniqueName: \"kubernetes.io/projected/20e81db0-f0e3-4948-9f05-eb34de21e118-kube-api-access-bnbmx\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.768085 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.768166 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-config\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.768222 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.768290 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnbmx\" (UniqueName: \"kubernetes.io/projected/20e81db0-f0e3-4948-9f05-eb34de21e118-kube-api-access-bnbmx\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.768350 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.768392 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.768448 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.771246 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.771580 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.772756 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-config\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.776642 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.776690 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.776951 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20e81db0-f0e3-4948-9f05-eb34de21e118-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.792262 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnbmx\" (UniqueName: \"kubernetes.io/projected/20e81db0-f0e3-4948-9f05-eb34de21e118-kube-api-access-bnbmx\") pod \"dnsmasq-dns-f4d4c4b7-rfkgb\" (UID: \"20e81db0-f0e3-4948-9f05-eb34de21e118\") " pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.868188 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.886552 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.971336 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-sb\") pod \"47101850-5ecb-4158-8a6e-2c4541850b48\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.971766 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-nb\") pod \"47101850-5ecb-4158-8a6e-2c4541850b48\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.972383 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-swift-storage-0\") pod \"47101850-5ecb-4158-8a6e-2c4541850b48\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.972474 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-config\") pod \"47101850-5ecb-4158-8a6e-2c4541850b48\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.972835 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-svc\") pod \"47101850-5ecb-4158-8a6e-2c4541850b48\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.973010 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f9td\" (UniqueName: \"kubernetes.io/projected/47101850-5ecb-4158-8a6e-2c4541850b48-kube-api-access-5f9td\") pod \"47101850-5ecb-4158-8a6e-2c4541850b48\" (UID: \"47101850-5ecb-4158-8a6e-2c4541850b48\") " Feb 23 10:29:27 crc kubenswrapper[4904]: I0223 10:29:27.979678 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47101850-5ecb-4158-8a6e-2c4541850b48-kube-api-access-5f9td" (OuterVolumeSpecName: "kube-api-access-5f9td") pod "47101850-5ecb-4158-8a6e-2c4541850b48" (UID: "47101850-5ecb-4158-8a6e-2c4541850b48"). InnerVolumeSpecName "kube-api-access-5f9td". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.054941 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47101850-5ecb-4158-8a6e-2c4541850b48" (UID: "47101850-5ecb-4158-8a6e-2c4541850b48"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.063441 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47101850-5ecb-4158-8a6e-2c4541850b48" (UID: "47101850-5ecb-4158-8a6e-2c4541850b48"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.075362 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47101850-5ecb-4158-8a6e-2c4541850b48" (UID: "47101850-5ecb-4158-8a6e-2c4541850b48"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.075861 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f9td\" (UniqueName: \"kubernetes.io/projected/47101850-5ecb-4158-8a6e-2c4541850b48-kube-api-access-5f9td\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.081640 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.081656 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.095899 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-config" (OuterVolumeSpecName: "config") pod "47101850-5ecb-4158-8a6e-2c4541850b48" (UID: "47101850-5ecb-4158-8a6e-2c4541850b48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.102760 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47101850-5ecb-4158-8a6e-2c4541850b48" (UID: "47101850-5ecb-4158-8a6e-2c4541850b48"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.189283 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.189599 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.189617 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47101850-5ecb-4158-8a6e-2c4541850b48-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.316673 4904 generic.go:334] "Generic (PLEG): container finished" podID="47101850-5ecb-4158-8a6e-2c4541850b48" containerID="dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b" exitCode=0 Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.316733 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" event={"ID":"47101850-5ecb-4158-8a6e-2c4541850b48","Type":"ContainerDied","Data":"dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b"} Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.316763 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" event={"ID":"47101850-5ecb-4158-8a6e-2c4541850b48","Type":"ContainerDied","Data":"89a06f7ff63e4bfb522b997f34f137f39b92f890ca2d06298b0ea285aca87ce1"} Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.316779 4904 scope.go:117] "RemoveContainer" containerID="dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.316941 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-m8pl8" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.353599 4904 scope.go:117] "RemoveContainer" containerID="550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.354500 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m8pl8"] Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.363055 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m8pl8"] Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.387949 4904 scope.go:117] "RemoveContainer" containerID="dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b" Feb 23 10:29:28 crc kubenswrapper[4904]: E0223 10:29:28.388911 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b\": container with ID starting with dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b not found: ID does not exist" containerID="dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.388956 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b"} err="failed to get container status \"dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b\": rpc error: code = NotFound desc = could not find container \"dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b\": container with ID starting with dd88e7ac49caeb03a381d362bc656b97437dcd3e8c7aa2e3770a06ba4cbaee7b not found: ID does not exist" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.388981 4904 scope.go:117] "RemoveContainer" containerID="550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970" Feb 23 10:29:28 crc kubenswrapper[4904]: E0223 10:29:28.391188 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970\": container with ID starting with 550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970 not found: ID does not exist" containerID="550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.391220 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970"} err="failed to get container status \"550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970\": rpc error: code = NotFound desc = could not find container \"550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970\": container with ID starting with 550f3df47923553542ce5205731d33ad359c3393e7d8f3c0434b26626eeb7970 not found: ID does not exist" Feb 23 10:29:28 crc kubenswrapper[4904]: I0223 10:29:28.397176 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-rfkgb"] Feb 23 10:29:28 crc kubenswrapper[4904]: W0223 10:29:28.399986 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20e81db0_f0e3_4948_9f05_eb34de21e118.slice/crio-288a318c8955b4a77a40231f695ce00426f6d045fd952b1757d70fe1f6958e3f WatchSource:0}: Error finding 
container 288a318c8955b4a77a40231f695ce00426f6d045fd952b1757d70fe1f6958e3f: Status 404 returned error can't find the container with id 288a318c8955b4a77a40231f695ce00426f6d045fd952b1757d70fe1f6958e3f Feb 23 10:29:29 crc kubenswrapper[4904]: I0223 10:29:29.271586 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47101850-5ecb-4158-8a6e-2c4541850b48" path="/var/lib/kubelet/pods/47101850-5ecb-4158-8a6e-2c4541850b48/volumes" Feb 23 10:29:29 crc kubenswrapper[4904]: I0223 10:29:29.334963 4904 generic.go:334] "Generic (PLEG): container finished" podID="20e81db0-f0e3-4948-9f05-eb34de21e118" containerID="2816e083f868ce7ca27242df80c7cd387068475c8a5a77864a9c67404e4694c8" exitCode=0 Feb 23 10:29:29 crc kubenswrapper[4904]: I0223 10:29:29.335034 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" event={"ID":"20e81db0-f0e3-4948-9f05-eb34de21e118","Type":"ContainerDied","Data":"2816e083f868ce7ca27242df80c7cd387068475c8a5a77864a9c67404e4694c8"} Feb 23 10:29:29 crc kubenswrapper[4904]: I0223 10:29:29.335075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" event={"ID":"20e81db0-f0e3-4948-9f05-eb34de21e118","Type":"ContainerStarted","Data":"288a318c8955b4a77a40231f695ce00426f6d045fd952b1757d70fe1f6958e3f"} Feb 23 10:29:30 crc kubenswrapper[4904]: I0223 10:29:30.349901 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" event={"ID":"20e81db0-f0e3-4948-9f05-eb34de21e118","Type":"ContainerStarted","Data":"1a6a28842c3bbe3866a13b120ddf62104220cf9f46e92a89ef7e53d840d0564a"} Feb 23 10:29:30 crc kubenswrapper[4904]: I0223 10:29:30.350482 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:30 crc kubenswrapper[4904]: I0223 10:29:30.387879 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" podStartSLOduration=3.387857731 podStartE2EDuration="3.387857731s" podCreationTimestamp="2026-02-23 10:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:29:30.380998296 +0000 UTC m=+1403.801371849" watchObservedRunningTime="2026-02-23 10:29:30.387857731 +0000 UTC m=+1403.808231254" Feb 23 10:29:37 crc kubenswrapper[4904]: I0223 10:29:37.871022 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f4d4c4b7-rfkgb" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.011219 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4vj4n"] Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.011437 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" podUID="03ea6cb5-c882-497e-b913-600e95d95c94" containerName="dnsmasq-dns" containerID="cri-o://1ec7d259bf388e2b5a7b0744ee3d3d6c6612588e02e081d22e35bafb9c5fa1ce" gracePeriod=10 Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.469067 4904 generic.go:334] "Generic (PLEG): container finished" podID="03ea6cb5-c882-497e-b913-600e95d95c94" containerID="1ec7d259bf388e2b5a7b0744ee3d3d6c6612588e02e081d22e35bafb9c5fa1ce" exitCode=0 Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.471189 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" 
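[annotation] The pod_startup_latency_tracker lines encode simple arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes image-pull time. The zero timestamps ("0001-01-01 00:00:00 +0000 UTC") for the dnsmasq pods mean no pull happened, which is why their SLO and E2E durations are equal. A sketch of the computation, plugged with the repo-setup-edpm numbers that appear later in this log; the output agrees with the logged values to within nanoseconds:

```go
package main

import (
	"fmt"
	"time"
)

// startupSLODuration: end-to-end is observedRunning - podCreation; the
// SLO duration subtracts the image-pull window when the pull timestamps
// are non-zero (time.Time{}'s zero value marks "no pull happened").
func startupSLODuration(created, firstPull, lastPull, running time.Time) (e2e, slo time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull)
	}
	return
}

func main() {
	// Timestamps from the repo-setup-edpm-deployment pod below.
	created := time.Date(2026, 2, 23, 10, 29, 55, 0, time.UTC)
	firstPull := time.Date(2026, 2, 23, 10, 29, 56, 942547554, time.UTC)
	lastPull := time.Date(2026, 2, 23, 10, 30, 9, 919522809, time.UTC)
	running := time.Date(2026, 2, 23, 10, 30, 10, 921591633, time.UTC)
	e2e, slo := startupSLODuration(created, firstPull, lastPull, running)
	fmt.Println(e2e, slo) // ~15.92s E2E, ~2.94s SLO, matching the log
}
```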
event={"ID":"03ea6cb5-c882-497e-b913-600e95d95c94","Type":"ContainerDied","Data":"1ec7d259bf388e2b5a7b0744ee3d3d6c6612588e02e081d22e35bafb9c5fa1ce"} Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.475517 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" event={"ID":"03ea6cb5-c882-497e-b913-600e95d95c94","Type":"ContainerDied","Data":"50d301bd141c089fb3f90beb0c5125f531ca7f30d6ac8d157ef88dbb0d05f0f4"} Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.475541 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50d301bd141c089fb3f90beb0c5125f531ca7f30d6ac8d157ef88dbb0d05f0f4" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.531205 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.662519 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-openstack-edpm-ipam\") pod \"03ea6cb5-c882-497e-b913-600e95d95c94\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.662657 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbwck\" (UniqueName: \"kubernetes.io/projected/03ea6cb5-c882-497e-b913-600e95d95c94-kube-api-access-gbwck\") pod \"03ea6cb5-c882-497e-b913-600e95d95c94\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.663012 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-config\") pod \"03ea6cb5-c882-497e-b913-600e95d95c94\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.663121 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-swift-storage-0\") pod \"03ea6cb5-c882-497e-b913-600e95d95c94\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.663184 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-nb\") pod \"03ea6cb5-c882-497e-b913-600e95d95c94\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.663218 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-svc\") pod \"03ea6cb5-c882-497e-b913-600e95d95c94\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.663252 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-sb\") pod \"03ea6cb5-c882-497e-b913-600e95d95c94\" (UID: \"03ea6cb5-c882-497e-b913-600e95d95c94\") " Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.670330 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/03ea6cb5-c882-497e-b913-600e95d95c94-kube-api-access-gbwck" (OuterVolumeSpecName: "kube-api-access-gbwck") pod "03ea6cb5-c882-497e-b913-600e95d95c94" (UID: "03ea6cb5-c882-497e-b913-600e95d95c94"). InnerVolumeSpecName "kube-api-access-gbwck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.716194 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-config" (OuterVolumeSpecName: "config") pod "03ea6cb5-c882-497e-b913-600e95d95c94" (UID: "03ea6cb5-c882-497e-b913-600e95d95c94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.716523 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "03ea6cb5-c882-497e-b913-600e95d95c94" (UID: "03ea6cb5-c882-497e-b913-600e95d95c94"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.724581 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03ea6cb5-c882-497e-b913-600e95d95c94" (UID: "03ea6cb5-c882-497e-b913-600e95d95c94"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.729517 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03ea6cb5-c882-497e-b913-600e95d95c94" (UID: "03ea6cb5-c882-497e-b913-600e95d95c94"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.747021 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03ea6cb5-c882-497e-b913-600e95d95c94" (UID: "03ea6cb5-c882-497e-b913-600e95d95c94"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.753600 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "03ea6cb5-c882-497e-b913-600e95d95c94" (UID: "03ea6cb5-c882-497e-b913-600e95d95c94"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.765867 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.765913 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.765925 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.765938 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.765951 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.765964 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbwck\" (UniqueName: \"kubernetes.io/projected/03ea6cb5-c882-497e-b913-600e95d95c94-kube-api-access-gbwck\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:38 crc kubenswrapper[4904]: I0223 10:29:38.765977 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ea6cb5-c882-497e-b913-600e95d95c94-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:29:39 crc kubenswrapper[4904]: I0223 10:29:39.485783 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-4vj4n" Feb 23 10:29:39 crc kubenswrapper[4904]: I0223 10:29:39.522650 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4vj4n"] Feb 23 10:29:39 crc kubenswrapper[4904]: I0223 10:29:39.534835 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4vj4n"] Feb 23 10:29:41 crc kubenswrapper[4904]: I0223 10:29:41.268139 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ea6cb5-c882-497e-b913-600e95d95c94" path="/var/lib/kubelet/pods/03ea6cb5-c882-497e-b913-600e95d95c94/volumes" Feb 23 10:29:42 crc kubenswrapper[4904]: I0223 10:29:42.507705 4904 scope.go:117] "RemoveContainer" containerID="704260c70f4076d1fec4f980f7f30afab803ee1bb524a3f9e9eac5ade1b470c1" Feb 23 10:29:51 crc kubenswrapper[4904]: I0223 10:29:51.648915 4904 generic.go:334] "Generic (PLEG): container finished" podID="8d3577f6-3d30-4a6c-9485-0429f1eb87f5" containerID="9446454bb2dd663d41a02f68a77a5ca4821383e93188b9eec50aad2ce0fa6404" exitCode=0 Feb 23 10:29:51 crc kubenswrapper[4904]: I0223 10:29:51.649007 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d3577f6-3d30-4a6c-9485-0429f1eb87f5","Type":"ContainerDied","Data":"9446454bb2dd663d41a02f68a77a5ca4821383e93188b9eec50aad2ce0fa6404"} Feb 23 10:29:52 crc kubenswrapper[4904]: I0223 10:29:52.663738 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d3577f6-3d30-4a6c-9485-0429f1eb87f5","Type":"ContainerStarted","Data":"ee5baf9de619bca0bd549168af2f4231be7925ed7b5460f5152cb86ff42f232b"} Feb 23 10:29:52 crc kubenswrapper[4904]: I0223 10:29:52.664410 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 10:29:52 crc kubenswrapper[4904]: I0223 10:29:52.704080 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.704048856 podStartE2EDuration="37.704048856s" podCreationTimestamp="2026-02-23 10:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:29:52.69506409 +0000 UTC m=+1426.115437623" watchObservedRunningTime="2026-02-23 10:29:52.704048856 +0000 UTC m=+1426.124422399" Feb 23 10:29:53 crc kubenswrapper[4904]: I0223 10:29:53.673656 4904 generic.go:334] "Generic (PLEG): container finished" podID="f66afb49-9f1b-43ea-966f-8aaf91eea84a" containerID="c9449273415eec7d767d34761320af12ba52204331906e8aecfcd14dfb4cdba4" exitCode=0 Feb 23 10:29:53 crc kubenswrapper[4904]: I0223 10:29:53.675365 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f66afb49-9f1b-43ea-966f-8aaf91eea84a","Type":"ContainerDied","Data":"c9449273415eec7d767d34761320af12ba52204331906e8aecfcd14dfb4cdba4"} Feb 23 10:29:54 crc kubenswrapper[4904]: I0223 10:29:54.688334 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f66afb49-9f1b-43ea-966f-8aaf91eea84a","Type":"ContainerStarted","Data":"9ea3b797f5becffbd4a42df97929fbaa6ae13ac0b3dd06c6257f8375775e9715"} Feb 23 10:29:54 crc kubenswrapper[4904]: I0223 10:29:54.689019 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:29:54 crc kubenswrapper[4904]: I0223 10:29:54.732178 4904 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.732147686 podStartE2EDuration="38.732147686s" podCreationTimestamp="2026-02-23 10:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:29:54.722537012 +0000 UTC m=+1428.142910535" watchObservedRunningTime="2026-02-23 10:29:54.732147686 +0000 UTC m=+1428.152521209" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.933694 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76"] Feb 23 10:29:55 crc kubenswrapper[4904]: E0223 10:29:55.934215 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ea6cb5-c882-497e-b913-600e95d95c94" containerName="init" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.934231 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ea6cb5-c882-497e-b913-600e95d95c94" containerName="init" Feb 23 10:29:55 crc kubenswrapper[4904]: E0223 10:29:55.934247 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47101850-5ecb-4158-8a6e-2c4541850b48" containerName="init" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.934253 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="47101850-5ecb-4158-8a6e-2c4541850b48" containerName="init" Feb 23 10:29:55 crc kubenswrapper[4904]: E0223 10:29:55.934269 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47101850-5ecb-4158-8a6e-2c4541850b48" containerName="dnsmasq-dns" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.934275 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="47101850-5ecb-4158-8a6e-2c4541850b48" containerName="dnsmasq-dns" Feb 23 10:29:55 crc kubenswrapper[4904]: E0223 10:29:55.934287 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ea6cb5-c882-497e-b913-600e95d95c94" containerName="dnsmasq-dns" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.934292 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ea6cb5-c882-497e-b913-600e95d95c94" containerName="dnsmasq-dns" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.934472 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ea6cb5-c882-497e-b913-600e95d95c94" containerName="dnsmasq-dns" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.934483 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="47101850-5ecb-4158-8a6e-2c4541850b48" containerName="dnsmasq-dns" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.935165 4904 util.go:30] "No sandbox for pod can be found. 
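[annotation] RemoveStaleState above is housekeeping: when the next pod is admitted, the CPU and memory managers sweep per-(podUID, containerName) assignments left behind by the two dnsmasq pods deleted earlier. A sketch with illustrative map types (the real managers store CPUSets and memory blocks, not strings):

```go
package main

import "fmt"

// removeStaleState drops resource assignments whose owning pods no
// longer exist, mirroring the paired "RemoveStaleState: removing
// container" / "Deleted CPUSet assignment" lines above.
func removeStaleState(assignments map[string]map[string]string, activePods map[string]bool) {
	for podUID, containers := range assignments {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
			delete(containers, name)
			fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", podUID, name)
		}
		delete(assignments, podUID)
	}
}

func main() {
	a := map[string]map[string]string{
		"03ea6cb5-c882-497e-b913-600e95d95c94": {"init": "0-3", "dnsmasq-dns": "0-3"},
	}
	removeStaleState(a, map[string]bool{}) // no active pods: sweep everything
}
```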
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.937883 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.937920 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.938270 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.947459 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76"] Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.953408 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.971930 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.971986 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.972122 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrz99\" (UniqueName: \"kubernetes.io/projected/acc091bb-7179-4ff4-ad67-3134e8143c90-kube-api-access-mrz99\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:55 crc kubenswrapper[4904]: I0223 10:29:55.972187 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.074981 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.075033 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.075083 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrz99\" (UniqueName: \"kubernetes.io/projected/acc091bb-7179-4ff4-ad67-3134e8143c90-kube-api-access-mrz99\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.075126 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.080300 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.081209 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.088788 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.103550 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrz99\" (UniqueName: \"kubernetes.io/projected/acc091bb-7179-4ff4-ad67-3134e8143c90-kube-api-access-mrz99\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.278139 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:29:56 crc kubenswrapper[4904]: I0223 10:29:56.930959 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76"] Feb 23 10:29:57 crc kubenswrapper[4904]: I0223 10:29:57.716708 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" event={"ID":"acc091bb-7179-4ff4-ad67-3134e8143c90","Type":"ContainerStarted","Data":"37f536babdc3ff35e7406dbf20e3274c281592c4ea48758067be7faedfc9a19c"} Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.155041 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8"] Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.158563 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.165757 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.166481 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.168242 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8"] Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.179222 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a290347-5e68-4afc-b963-25404ea29fef-config-volume\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.179341 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/2a290347-5e68-4afc-b963-25404ea29fef-kube-api-access-fg8z8\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.179597 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a290347-5e68-4afc-b963-25404ea29fef-secret-volume\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.284000 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a290347-5e68-4afc-b963-25404ea29fef-config-volume\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.284759 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg8z8\" (UniqueName: 
\"kubernetes.io/projected/2a290347-5e68-4afc-b963-25404ea29fef-kube-api-access-fg8z8\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.284680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a290347-5e68-4afc-b963-25404ea29fef-config-volume\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.285350 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a290347-5e68-4afc-b963-25404ea29fef-secret-volume\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.291832 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a290347-5e68-4afc-b963-25404ea29fef-secret-volume\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.301875 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/2a290347-5e68-4afc-b963-25404ea29fef-kube-api-access-fg8z8\") pod \"collect-profiles-29530710-4cvl8\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:00 crc kubenswrapper[4904]: I0223 10:30:00.500957 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:01 crc kubenswrapper[4904]: I0223 10:30:01.017154 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8"] Feb 23 10:30:01 crc kubenswrapper[4904]: W0223 10:30:01.032110 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a290347_5e68_4afc_b963_25404ea29fef.slice/crio-52193adcdfb614713c3e1585dd18382e165021e050c791134e1e8887d415b480 WatchSource:0}: Error finding container 52193adcdfb614713c3e1585dd18382e165021e050c791134e1e8887d415b480: Status 404 returned error can't find the container with id 52193adcdfb614713c3e1585dd18382e165021e050c791134e1e8887d415b480 Feb 23 10:30:01 crc kubenswrapper[4904]: I0223 10:30:01.777359 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" event={"ID":"2a290347-5e68-4afc-b963-25404ea29fef","Type":"ContainerStarted","Data":"2980df3aec0c1757ce98319bcc166a182bc963ac12a97c3da265b4662cdefef2"} Feb 23 10:30:01 crc kubenswrapper[4904]: I0223 10:30:01.777726 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" event={"ID":"2a290347-5e68-4afc-b963-25404ea29fef","Type":"ContainerStarted","Data":"52193adcdfb614713c3e1585dd18382e165021e050c791134e1e8887d415b480"} Feb 23 10:30:01 crc kubenswrapper[4904]: I0223 10:30:01.796451 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" podStartSLOduration=1.7964309090000001 podStartE2EDuration="1.796430909s" podCreationTimestamp="2026-02-23 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:30:01.792311671 +0000 UTC m=+1435.212685194" watchObservedRunningTime="2026-02-23 10:30:01.796430909 +0000 UTC m=+1435.216804422" Feb 23 10:30:02 crc kubenswrapper[4904]: I0223 10:30:02.789231 4904 generic.go:334] "Generic (PLEG): container finished" podID="2a290347-5e68-4afc-b963-25404ea29fef" containerID="2980df3aec0c1757ce98319bcc166a182bc963ac12a97c3da265b4662cdefef2" exitCode=0 Feb 23 10:30:02 crc kubenswrapper[4904]: I0223 10:30:02.789304 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" event={"ID":"2a290347-5e68-4afc-b963-25404ea29fef","Type":"ContainerDied","Data":"2980df3aec0c1757ce98319bcc166a182bc963ac12a97c3da265b4662cdefef2"} Feb 23 10:30:05 crc kubenswrapper[4904]: I0223 10:30:05.916070 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 10:30:06 crc kubenswrapper[4904]: I0223 10:30:06.983423 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 10:30:09 crc kubenswrapper[4904]: I0223 10:30:09.875468 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" event={"ID":"2a290347-5e68-4afc-b963-25404ea29fef","Type":"ContainerDied","Data":"52193adcdfb614713c3e1585dd18382e165021e050c791134e1e8887d415b480"} Feb 23 10:30:09 crc kubenswrapper[4904]: I0223 10:30:09.875783 4904 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="52193adcdfb614713c3e1585dd18382e165021e050c791134e1e8887d415b480" Feb 23 10:30:09 crc kubenswrapper[4904]: I0223 10:30:09.923111 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:30:09 crc kubenswrapper[4904]: I0223 10:30:09.944187 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.051529 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a290347-5e68-4afc-b963-25404ea29fef-config-volume\") pod \"2a290347-5e68-4afc-b963-25404ea29fef\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.051696 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/2a290347-5e68-4afc-b963-25404ea29fef-kube-api-access-fg8z8\") pod \"2a290347-5e68-4afc-b963-25404ea29fef\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.051801 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a290347-5e68-4afc-b963-25404ea29fef-secret-volume\") pod \"2a290347-5e68-4afc-b963-25404ea29fef\" (UID: \"2a290347-5e68-4afc-b963-25404ea29fef\") " Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.053221 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a290347-5e68-4afc-b963-25404ea29fef-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a290347-5e68-4afc-b963-25404ea29fef" (UID: "2a290347-5e68-4afc-b963-25404ea29fef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.059254 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a290347-5e68-4afc-b963-25404ea29fef-kube-api-access-fg8z8" (OuterVolumeSpecName: "kube-api-access-fg8z8") pod "2a290347-5e68-4afc-b963-25404ea29fef" (UID: "2a290347-5e68-4afc-b963-25404ea29fef"). InnerVolumeSpecName "kube-api-access-fg8z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.059875 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a290347-5e68-4afc-b963-25404ea29fef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a290347-5e68-4afc-b963-25404ea29fef" (UID: "2a290347-5e68-4afc-b963-25404ea29fef"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.154997 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a290347-5e68-4afc-b963-25404ea29fef-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.155032 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg8z8\" (UniqueName: \"kubernetes.io/projected/2a290347-5e68-4afc-b963-25404ea29fef-kube-api-access-fg8z8\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.155045 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a290347-5e68-4afc-b963-25404ea29fef-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.892502 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8" Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.897861 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" event={"ID":"acc091bb-7179-4ff4-ad67-3134e8143c90","Type":"ContainerStarted","Data":"ea97efdeeffeac3021441da73ef2ba707b1a51d38316febbfbdba9b291749b2b"} Feb 23 10:30:10 crc kubenswrapper[4904]: I0223 10:30:10.921619 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" podStartSLOduration=2.944616338 podStartE2EDuration="15.921591633s" podCreationTimestamp="2026-02-23 10:29:55 +0000 UTC" firstStartedPulling="2026-02-23 10:29:56.942547554 +0000 UTC m=+1430.362921067" lastFinishedPulling="2026-02-23 10:30:09.919522809 +0000 UTC m=+1443.339896362" observedRunningTime="2026-02-23 10:30:10.919604096 +0000 UTC m=+1444.339977609" watchObservedRunningTime="2026-02-23 10:30:10.921591633 +0000 UTC m=+1444.341965166" Feb 23 10:30:21 crc kubenswrapper[4904]: I0223 10:30:21.067349 4904 generic.go:334] "Generic (PLEG): container finished" podID="acc091bb-7179-4ff4-ad67-3134e8143c90" containerID="ea97efdeeffeac3021441da73ef2ba707b1a51d38316febbfbdba9b291749b2b" exitCode=0 Feb 23 10:30:21 crc kubenswrapper[4904]: I0223 10:30:21.067408 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" event={"ID":"acc091bb-7179-4ff4-ad67-3134e8143c90","Type":"ContainerDied","Data":"ea97efdeeffeac3021441da73ef2ba707b1a51d38316febbfbdba9b291749b2b"} Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.711880 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.843039 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-repo-setup-combined-ca-bundle\") pod \"acc091bb-7179-4ff4-ad67-3134e8143c90\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.843161 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-ssh-key-openstack-edpm-ipam\") pod \"acc091bb-7179-4ff4-ad67-3134e8143c90\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.843238 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrz99\" (UniqueName: \"kubernetes.io/projected/acc091bb-7179-4ff4-ad67-3134e8143c90-kube-api-access-mrz99\") pod \"acc091bb-7179-4ff4-ad67-3134e8143c90\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.843474 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-inventory\") pod \"acc091bb-7179-4ff4-ad67-3134e8143c90\" (UID: \"acc091bb-7179-4ff4-ad67-3134e8143c90\") " Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.848890 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc091bb-7179-4ff4-ad67-3134e8143c90-kube-api-access-mrz99" (OuterVolumeSpecName: "kube-api-access-mrz99") pod "acc091bb-7179-4ff4-ad67-3134e8143c90" (UID: "acc091bb-7179-4ff4-ad67-3134e8143c90"). InnerVolumeSpecName "kube-api-access-mrz99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.859734 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "acc091bb-7179-4ff4-ad67-3134e8143c90" (UID: "acc091bb-7179-4ff4-ad67-3134e8143c90"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.874339 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-inventory" (OuterVolumeSpecName: "inventory") pod "acc091bb-7179-4ff4-ad67-3134e8143c90" (UID: "acc091bb-7179-4ff4-ad67-3134e8143c90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.885525 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "acc091bb-7179-4ff4-ad67-3134e8143c90" (UID: "acc091bb-7179-4ff4-ad67-3134e8143c90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.946177 4904 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.946223 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.946241 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrz99\" (UniqueName: \"kubernetes.io/projected/acc091bb-7179-4ff4-ad67-3134e8143c90-kube-api-access-mrz99\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:22 crc kubenswrapper[4904]: I0223 10:30:22.946256 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acc091bb-7179-4ff4-ad67-3134e8143c90-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.099951 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" event={"ID":"acc091bb-7179-4ff4-ad67-3134e8143c90","Type":"ContainerDied","Data":"37f536babdc3ff35e7406dbf20e3274c281592c4ea48758067be7faedfc9a19c"} Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.099995 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f536babdc3ff35e7406dbf20e3274c281592c4ea48758067be7faedfc9a19c" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.100070 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.204645 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w"] Feb 23 10:30:23 crc kubenswrapper[4904]: E0223 10:30:23.205065 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a290347-5e68-4afc-b963-25404ea29fef" containerName="collect-profiles" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.205084 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a290347-5e68-4afc-b963-25404ea29fef" containerName="collect-profiles" Feb 23 10:30:23 crc kubenswrapper[4904]: E0223 10:30:23.205119 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc091bb-7179-4ff4-ad67-3134e8143c90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.205127 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc091bb-7179-4ff4-ad67-3134e8143c90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.205288 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc091bb-7179-4ff4-ad67-3134e8143c90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.205315 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a290347-5e68-4afc-b963-25404ea29fef" containerName="collect-profiles" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.205973 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.208333 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.208439 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.212291 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.218857 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.231534 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w"] Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.252808 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.252852 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.252911 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498cq\" (UniqueName: \"kubernetes.io/projected/8987115a-6e48-49c4-bf47-aad12518b1d4-kube-api-access-498cq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.355887 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.356294 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.356457 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-498cq\" (UniqueName: \"kubernetes.io/projected/8987115a-6e48-49c4-bf47-aad12518b1d4-kube-api-access-498cq\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.361758 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.367978 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.417771 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-498cq\" (UniqueName: \"kubernetes.io/projected/8987115a-6e48-49c4-bf47-aad12518b1d4-kube-api-access-498cq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xd75w\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:23 crc kubenswrapper[4904]: I0223 10:30:23.530321 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:24 crc kubenswrapper[4904]: I0223 10:30:24.162389 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w"] Feb 23 10:30:24 crc kubenswrapper[4904]: W0223 10:30:24.166961 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8987115a_6e48_49c4_bf47_aad12518b1d4.slice/crio-6030510084bd13fef066c0fa24013cdc857691ed4f5255a47387c28f2730474f WatchSource:0}: Error finding container 6030510084bd13fef066c0fa24013cdc857691ed4f5255a47387c28f2730474f: Status 404 returned error can't find the container with id 6030510084bd13fef066c0fa24013cdc857691ed4f5255a47387c28f2730474f Feb 23 10:30:25 crc kubenswrapper[4904]: I0223 10:30:25.140630 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" event={"ID":"8987115a-6e48-49c4-bf47-aad12518b1d4","Type":"ContainerStarted","Data":"41967b39e2e863072390d8664034ab06bbf8d637448476c6c35afc499e75625d"} Feb 23 10:30:25 crc kubenswrapper[4904]: I0223 10:30:25.141129 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" event={"ID":"8987115a-6e48-49c4-bf47-aad12518b1d4","Type":"ContainerStarted","Data":"6030510084bd13fef066c0fa24013cdc857691ed4f5255a47387c28f2730474f"} Feb 23 10:30:25 crc kubenswrapper[4904]: I0223 10:30:25.174389 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" podStartSLOduration=1.693456275 podStartE2EDuration="2.174361564s" podCreationTimestamp="2026-02-23 10:30:23 +0000 UTC" firstStartedPulling="2026-02-23 10:30:24.172513366 +0000 UTC m=+1457.592886889" lastFinishedPulling="2026-02-23 10:30:24.653418665 +0000 UTC m=+1458.073792178" 
observedRunningTime="2026-02-23 10:30:25.160492119 +0000 UTC m=+1458.580865632" watchObservedRunningTime="2026-02-23 10:30:25.174361564 +0000 UTC m=+1458.594735107" Feb 23 10:30:28 crc kubenswrapper[4904]: I0223 10:30:28.185353 4904 generic.go:334] "Generic (PLEG): container finished" podID="8987115a-6e48-49c4-bf47-aad12518b1d4" containerID="41967b39e2e863072390d8664034ab06bbf8d637448476c6c35afc499e75625d" exitCode=0 Feb 23 10:30:28 crc kubenswrapper[4904]: I0223 10:30:28.185504 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" event={"ID":"8987115a-6e48-49c4-bf47-aad12518b1d4","Type":"ContainerDied","Data":"41967b39e2e863072390d8664034ab06bbf8d637448476c6c35afc499e75625d"} Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.641500 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.698215 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-498cq\" (UniqueName: \"kubernetes.io/projected/8987115a-6e48-49c4-bf47-aad12518b1d4-kube-api-access-498cq\") pod \"8987115a-6e48-49c4-bf47-aad12518b1d4\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.698287 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-ssh-key-openstack-edpm-ipam\") pod \"8987115a-6e48-49c4-bf47-aad12518b1d4\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.698359 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-inventory\") pod \"8987115a-6e48-49c4-bf47-aad12518b1d4\" (UID: \"8987115a-6e48-49c4-bf47-aad12518b1d4\") " Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.705198 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8987115a-6e48-49c4-bf47-aad12518b1d4-kube-api-access-498cq" (OuterVolumeSpecName: "kube-api-access-498cq") pod "8987115a-6e48-49c4-bf47-aad12518b1d4" (UID: "8987115a-6e48-49c4-bf47-aad12518b1d4"). InnerVolumeSpecName "kube-api-access-498cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.732330 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-inventory" (OuterVolumeSpecName: "inventory") pod "8987115a-6e48-49c4-bf47-aad12518b1d4" (UID: "8987115a-6e48-49c4-bf47-aad12518b1d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.736663 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8987115a-6e48-49c4-bf47-aad12518b1d4" (UID: "8987115a-6e48-49c4-bf47-aad12518b1d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.813409 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.813537 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8987115a-6e48-49c4-bf47-aad12518b1d4-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:29 crc kubenswrapper[4904]: I0223 10:30:29.813630 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-498cq\" (UniqueName: \"kubernetes.io/projected/8987115a-6e48-49c4-bf47-aad12518b1d4-kube-api-access-498cq\") on node \"crc\" DevicePath \"\"" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.210492 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" event={"ID":"8987115a-6e48-49c4-bf47-aad12518b1d4","Type":"ContainerDied","Data":"6030510084bd13fef066c0fa24013cdc857691ed4f5255a47387c28f2730474f"} Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.210748 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6030510084bd13fef066c0fa24013cdc857691ed4f5255a47387c28f2730474f" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.210553 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xd75w" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.285757 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg"] Feb 23 10:30:30 crc kubenswrapper[4904]: E0223 10:30:30.286245 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8987115a-6e48-49c4-bf47-aad12518b1d4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.286268 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8987115a-6e48-49c4-bf47-aad12518b1d4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.286552 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="8987115a-6e48-49c4-bf47-aad12518b1d4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.287364 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.289234 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.289493 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.289622 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.290803 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.300481 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg"] Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.323822 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.323872 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.323923 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kb82\" (UniqueName: \"kubernetes.io/projected/da651589-0c88-4249-9dff-de1c46412cf5-kube-api-access-8kb82\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.324034 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.426424 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.426523 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.426574 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.426652 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kb82\" (UniqueName: \"kubernetes.io/projected/da651589-0c88-4249-9dff-de1c46412cf5-kube-api-access-8kb82\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.430917 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.431336 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.436146 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.448601 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kb82\" (UniqueName: \"kubernetes.io/projected/da651589-0c88-4249-9dff-de1c46412cf5-kube-api-access-8kb82\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:30 crc kubenswrapper[4904]: I0223 10:30:30.602116 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:30:31 crc kubenswrapper[4904]: W0223 10:30:31.004980 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda651589_0c88_4249_9dff_de1c46412cf5.slice/crio-795f547503ac19b0cae899a21c3939610aaf57e3c9c489e1004ee4f7eaf88d20 WatchSource:0}: Error finding container 795f547503ac19b0cae899a21c3939610aaf57e3c9c489e1004ee4f7eaf88d20: Status 404 returned error can't find the container with id 795f547503ac19b0cae899a21c3939610aaf57e3c9c489e1004ee4f7eaf88d20 Feb 23 10:30:31 crc kubenswrapper[4904]: I0223 10:30:31.006474 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg"] Feb 23 10:30:31 crc kubenswrapper[4904]: I0223 10:30:31.234230 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" event={"ID":"da651589-0c88-4249-9dff-de1c46412cf5","Type":"ContainerStarted","Data":"795f547503ac19b0cae899a21c3939610aaf57e3c9c489e1004ee4f7eaf88d20"} Feb 23 10:30:32 crc kubenswrapper[4904]: I0223 10:30:32.256836 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" event={"ID":"da651589-0c88-4249-9dff-de1c46412cf5","Type":"ContainerStarted","Data":"e7f0c223ec943c5cc61f0cfcd101ae8a3ed1d1cf04e8f847a9c60fbac7926c93"} Feb 23 10:30:32 crc kubenswrapper[4904]: I0223 10:30:32.296553 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" podStartSLOduration=1.874078911 podStartE2EDuration="2.296523195s" podCreationTimestamp="2026-02-23 10:30:30 +0000 UTC" firstStartedPulling="2026-02-23 10:30:31.008131211 +0000 UTC m=+1464.428504724" lastFinishedPulling="2026-02-23 10:30:31.430575475 +0000 UTC m=+1464.850949008" observedRunningTime="2026-02-23 10:30:32.279457699 +0000 UTC m=+1465.699831252" watchObservedRunningTime="2026-02-23 10:30:32.296523195 +0000 UTC m=+1465.716896748" Feb 23 10:30:42 crc kubenswrapper[4904]: I0223 10:30:42.703087 4904 scope.go:117] "RemoveContainer" containerID="3db63b22dedbccef9d7abb9e3db114b1e5c000f0a0f753511d957c6bfa63fde9" Feb 23 10:30:42 crc kubenswrapper[4904]: I0223 10:30:42.767454 4904 scope.go:117] "RemoveContainer" containerID="b98c5f0ef3328ef89f141a647dfecd67b5b74ee464f3787b78566d9469ae1315" Feb 23 10:30:42 crc kubenswrapper[4904]: I0223 10:30:42.849173 4904 scope.go:117] "RemoveContainer" containerID="68999a16abbf0c15459e023d535ce66cdb82e11d6ed962593d1a83abfe8cc1c7" Feb 23 10:31:42 crc kubenswrapper[4904]: I0223 10:31:42.956143 4904 scope.go:117] "RemoveContainer" containerID="b1dfb5af29c04d61bf96636418bb1f8f5ecb431f5c4ae0d27af0d90dc125d3a7" Feb 23 10:31:42 crc kubenswrapper[4904]: I0223 10:31:42.995496 4904 scope.go:117] "RemoveContainer" containerID="67df6264f814a89e36e55ee7a656b37b50d14343e1844ee9024746c48e47f393" Feb 23 10:31:43 crc kubenswrapper[4904]: I0223 10:31:43.045872 4904 scope.go:117] "RemoveContainer" containerID="09302921b0f0aa31e810cb906e305c5c7bbc6134c1875b812be5ce7c2ebbba99" Feb 23 10:31:47 crc kubenswrapper[4904]: I0223 10:31:47.397667 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 23 10:31:47 crc kubenswrapper[4904]: I0223 10:31:47.399269 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:32:17 crc kubenswrapper[4904]: I0223 10:32:17.397907 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:32:17 crc kubenswrapper[4904]: I0223 10:32:17.398510 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:32:18 crc kubenswrapper[4904]: I0223 10:32:18.919288 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t45h7"] Feb 23 10:32:18 crc kubenswrapper[4904]: I0223 10:32:18.922345 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:18 crc kubenswrapper[4904]: I0223 10:32:18.940466 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t45h7"] Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.103469 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-utilities\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.103537 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxhm\" (UniqueName: \"kubernetes.io/projected/9265564f-d316-4af9-aee7-05fef3748cc6-kube-api-access-5pxhm\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.103775 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-catalog-content\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.205882 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-catalog-content\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.206077 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-utilities\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.206102 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxhm\" (UniqueName: \"kubernetes.io/projected/9265564f-d316-4af9-aee7-05fef3748cc6-kube-api-access-5pxhm\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.206535 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-catalog-content\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.206602 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-utilities\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.235519 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxhm\" (UniqueName: \"kubernetes.io/projected/9265564f-d316-4af9-aee7-05fef3748cc6-kube-api-access-5pxhm\") pod \"redhat-operators-t45h7\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.246348 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:19 crc kubenswrapper[4904]: I0223 10:32:19.713554 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t45h7"] Feb 23 10:32:20 crc kubenswrapper[4904]: I0223 10:32:20.079250 4904 generic.go:334] "Generic (PLEG): container finished" podID="9265564f-d316-4af9-aee7-05fef3748cc6" containerID="a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e" exitCode=0 Feb 23 10:32:20 crc kubenswrapper[4904]: I0223 10:32:20.079555 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t45h7" event={"ID":"9265564f-d316-4af9-aee7-05fef3748cc6","Type":"ContainerDied","Data":"a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e"} Feb 23 10:32:20 crc kubenswrapper[4904]: I0223 10:32:20.079592 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t45h7" event={"ID":"9265564f-d316-4af9-aee7-05fef3748cc6","Type":"ContainerStarted","Data":"e4559b24913bcb0dd21f89347aee2965cbcfb17ac0ef23e0c81cdf4f50894215"} Feb 23 10:32:20 crc kubenswrapper[4904]: I0223 10:32:20.083369 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:32:21 crc kubenswrapper[4904]: I0223 10:32:21.093699 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t45h7" event={"ID":"9265564f-d316-4af9-aee7-05fef3748cc6","Type":"ContainerStarted","Data":"48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47"} Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.305794 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nvmcq"] Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.309617 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.345850 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nvmcq"] Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.375014 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-catalog-content\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.375359 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-utilities\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.375466 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj8fj\" (UniqueName: \"kubernetes.io/projected/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-kube-api-access-bj8fj\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.476738 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-utilities\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.476837 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj8fj\" (UniqueName: \"kubernetes.io/projected/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-kube-api-access-bj8fj\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.476910 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-catalog-content\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.477227 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-utilities\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.477272 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-catalog-content\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.499150 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bj8fj\" (UniqueName: \"kubernetes.io/projected/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-kube-api-access-bj8fj\") pod \"community-operators-nvmcq\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:24 crc kubenswrapper[4904]: I0223 10:32:24.660282 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:25 crc kubenswrapper[4904]: I0223 10:32:25.159861 4904 generic.go:334] "Generic (PLEG): container finished" podID="9265564f-d316-4af9-aee7-05fef3748cc6" containerID="48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47" exitCode=0 Feb 23 10:32:25 crc kubenswrapper[4904]: I0223 10:32:25.160126 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t45h7" event={"ID":"9265564f-d316-4af9-aee7-05fef3748cc6","Type":"ContainerDied","Data":"48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47"} Feb 23 10:32:25 crc kubenswrapper[4904]: I0223 10:32:25.228449 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nvmcq"] Feb 23 10:32:26 crc kubenswrapper[4904]: I0223 10:32:26.170588 4904 generic.go:334] "Generic (PLEG): container finished" podID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerID="2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b" exitCode=0 Feb 23 10:32:26 crc kubenswrapper[4904]: I0223 10:32:26.170651 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvmcq" event={"ID":"2b31b3b7-65d8-49b8-95c2-19be5fb87a24","Type":"ContainerDied","Data":"2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b"} Feb 23 10:32:26 crc kubenswrapper[4904]: I0223 10:32:26.171372 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvmcq" event={"ID":"2b31b3b7-65d8-49b8-95c2-19be5fb87a24","Type":"ContainerStarted","Data":"a7dc671552ef8d84712600b0fee7cdad3b1ee53d7cf1a987007ba3dda2b76612"} Feb 23 10:32:26 crc kubenswrapper[4904]: I0223 10:32:26.178797 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t45h7" event={"ID":"9265564f-d316-4af9-aee7-05fef3748cc6","Type":"ContainerStarted","Data":"d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624"} Feb 23 10:32:26 crc kubenswrapper[4904]: I0223 10:32:26.219948 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t45h7" podStartSLOduration=2.721845555 podStartE2EDuration="8.219929505s" podCreationTimestamp="2026-02-23 10:32:18 +0000 UTC" firstStartedPulling="2026-02-23 10:32:20.082597128 +0000 UTC m=+1573.502970641" lastFinishedPulling="2026-02-23 10:32:25.580681058 +0000 UTC m=+1579.001054591" observedRunningTime="2026-02-23 10:32:26.216202589 +0000 UTC m=+1579.636576112" watchObservedRunningTime="2026-02-23 10:32:26.219929505 +0000 UTC m=+1579.640303038" Feb 23 10:32:27 crc kubenswrapper[4904]: I0223 10:32:27.200247 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvmcq" event={"ID":"2b31b3b7-65d8-49b8-95c2-19be5fb87a24","Type":"ContainerStarted","Data":"936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd"} Feb 23 10:32:28 crc kubenswrapper[4904]: E0223 10:32:28.585211 4904 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b31b3b7_65d8_49b8_95c2_19be5fb87a24.slice/crio-936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd.scope\": RecentStats: unable to find data in memory cache]" Feb 23 10:32:29 crc kubenswrapper[4904]: I0223 10:32:29.228552 4904 generic.go:334] "Generic (PLEG): container finished" podID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerID="936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd" exitCode=0 Feb 23 10:32:29 crc kubenswrapper[4904]: I0223 10:32:29.228630 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvmcq" event={"ID":"2b31b3b7-65d8-49b8-95c2-19be5fb87a24","Type":"ContainerDied","Data":"936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd"} Feb 23 10:32:29 crc kubenswrapper[4904]: I0223 10:32:29.247182 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:29 crc kubenswrapper[4904]: I0223 10:32:29.247276 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:30 crc kubenswrapper[4904]: I0223 10:32:30.247049 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvmcq" event={"ID":"2b31b3b7-65d8-49b8-95c2-19be5fb87a24","Type":"ContainerStarted","Data":"222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5"} Feb 23 10:32:30 crc kubenswrapper[4904]: I0223 10:32:30.279201 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nvmcq" podStartSLOduration=2.84897537 podStartE2EDuration="6.27917038s" podCreationTimestamp="2026-02-23 10:32:24 +0000 UTC" firstStartedPulling="2026-02-23 10:32:26.1744541 +0000 UTC m=+1579.594827643" lastFinishedPulling="2026-02-23 10:32:29.60464913 +0000 UTC m=+1583.025022653" observedRunningTime="2026-02-23 10:32:30.269528326 +0000 UTC m=+1583.689901859" watchObservedRunningTime="2026-02-23 10:32:30.27917038 +0000 UTC m=+1583.699543923" Feb 23 10:32:30 crc kubenswrapper[4904]: I0223 10:32:30.331230 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t45h7" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="registry-server" probeResult="failure" output=< Feb 23 10:32:30 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 10:32:30 crc kubenswrapper[4904]: > Feb 23 10:32:34 crc kubenswrapper[4904]: I0223 10:32:34.661195 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:34 crc kubenswrapper[4904]: I0223 10:32:34.662606 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:34 crc kubenswrapper[4904]: I0223 10:32:34.722452 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:35 crc kubenswrapper[4904]: I0223 10:32:35.391196 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:35 crc kubenswrapper[4904]: I0223 10:32:35.463657 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nvmcq"] Feb 23 
10:32:37 crc kubenswrapper[4904]: I0223 10:32:37.344191 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nvmcq" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerName="registry-server" containerID="cri-o://222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5" gracePeriod=2 Feb 23 10:32:37 crc kubenswrapper[4904]: I0223 10:32:37.857295 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:37 crc kubenswrapper[4904]: I0223 10:32:37.939228 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-utilities\") pod \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " Feb 23 10:32:37 crc kubenswrapper[4904]: I0223 10:32:37.939369 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-catalog-content\") pod \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " Feb 23 10:32:37 crc kubenswrapper[4904]: I0223 10:32:37.939432 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj8fj\" (UniqueName: \"kubernetes.io/projected/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-kube-api-access-bj8fj\") pod \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\" (UID: \"2b31b3b7-65d8-49b8-95c2-19be5fb87a24\") " Feb 23 10:32:37 crc kubenswrapper[4904]: I0223 10:32:37.945260 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-kube-api-access-bj8fj" (OuterVolumeSpecName: "kube-api-access-bj8fj") pod "2b31b3b7-65d8-49b8-95c2-19be5fb87a24" (UID: "2b31b3b7-65d8-49b8-95c2-19be5fb87a24"). InnerVolumeSpecName "kube-api-access-bj8fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:32:37 crc kubenswrapper[4904]: I0223 10:32:37.945744 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-utilities" (OuterVolumeSpecName: "utilities") pod "2b31b3b7-65d8-49b8-95c2-19be5fb87a24" (UID: "2b31b3b7-65d8-49b8-95c2-19be5fb87a24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.009132 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b31b3b7-65d8-49b8-95c2-19be5fb87a24" (UID: "2b31b3b7-65d8-49b8-95c2-19be5fb87a24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.042364 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.042406 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.042418 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj8fj\" (UniqueName: \"kubernetes.io/projected/2b31b3b7-65d8-49b8-95c2-19be5fb87a24-kube-api-access-bj8fj\") on node \"crc\" DevicePath \"\"" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.359436 4904 generic.go:334] "Generic (PLEG): container finished" podID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerID="222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5" exitCode=0 Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.359512 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvmcq" event={"ID":"2b31b3b7-65d8-49b8-95c2-19be5fb87a24","Type":"ContainerDied","Data":"222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5"} Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.359544 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nvmcq" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.359573 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvmcq" event={"ID":"2b31b3b7-65d8-49b8-95c2-19be5fb87a24","Type":"ContainerDied","Data":"a7dc671552ef8d84712600b0fee7cdad3b1ee53d7cf1a987007ba3dda2b76612"} Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.359601 4904 scope.go:117] "RemoveContainer" containerID="222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.389313 4904 scope.go:117] "RemoveContainer" containerID="936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.400928 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nvmcq"] Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.417705 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nvmcq"] Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.419637 4904 scope.go:117] "RemoveContainer" containerID="2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.464204 4904 scope.go:117] "RemoveContainer" containerID="222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5" Feb 23 10:32:38 crc kubenswrapper[4904]: E0223 10:32:38.464615 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5\": container with ID starting with 222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5 not found: ID does not exist" containerID="222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5" Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.464672 
4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5"} err="failed to get container status \"222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5\": rpc error: code = NotFound desc = could not find container \"222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5\": container with ID starting with 222924434f0b816a35ba77439fa3dbc8aaebec2e7bec192a03e09b68717911a5 not found: ID does not exist"
Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.464722 4904 scope.go:117] "RemoveContainer" containerID="936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd"
Feb 23 10:32:38 crc kubenswrapper[4904]: E0223 10:32:38.465168 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd\": container with ID starting with 936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd not found: ID does not exist" containerID="936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd"
Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.465201 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd"} err="failed to get container status \"936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd\": rpc error: code = NotFound desc = could not find container \"936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd\": container with ID starting with 936fc3b6fecb6d4465406a2655b30351e8e6d27ccda1c64e7845bccf8f9541dd not found: ID does not exist"
Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.465223 4904 scope.go:117] "RemoveContainer" containerID="2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b"
Feb 23 10:32:38 crc kubenswrapper[4904]: E0223 10:32:38.465472 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b\": container with ID starting with 2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b not found: ID does not exist" containerID="2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b"
Feb 23 10:32:38 crc kubenswrapper[4904]: I0223 10:32:38.465489 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b"} err="failed to get container status \"2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b\": rpc error: code = NotFound desc = could not find container \"2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b\": container with ID starting with 2fe41deb05523804fa5e5578f426cba6b747e802a05fa964f961cc67982cf25b not found: ID does not exist"
Feb 23 10:32:39 crc kubenswrapper[4904]: I0223 10:32:39.284286 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" path="/var/lib/kubelet/pods/2b31b3b7-65d8-49b8-95c2-19be5fb87a24/volumes"
Feb 23 10:32:40 crc kubenswrapper[4904]: I0223 10:32:40.326285 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t45h7" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="registry-server" probeResult="failure" output=<
Feb 23 10:32:40 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s
Feb 23 10:32:40 crc kubenswrapper[4904]: >
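
The startup-probe failure above is the catalog pod's health check against the registry-server gRPC port timing out: nothing accepted a connection on :50051 within the 1s budget. A minimal sketch that approximates the check with a plain TCP dial; the address and timeout come from the probe output above, the rest is illustrative, and the real probe speaks the gRPC health protocol rather than raw TCP:

    // probe_sketch.go: approximate the registry-server startup probe.
    // A raw TCP dial only proves the port accepts connections; the actual
    // probe is a gRPC health check against the same port.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // ":50051" and the 1s budget are taken from the log output above.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
        if err != nil {
            fmt.Printf("probe failure: %v\n", err) // registry-server not up yet
            return
        }
        conn.Close()
        fmt.Println("probe success: port is accepting connections")
    }

The same pod flips to started/ready at 10:32:49 below, so the failure here reads as the registry still coming up rather than anything broken.
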
Feb 23 10:32:47 crc kubenswrapper[4904]: I0223 10:32:47.398409 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 10:32:47 crc kubenswrapper[4904]: I0223 10:32:47.399115 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 10:32:47 crc kubenswrapper[4904]: I0223 10:32:47.399187 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k"
Feb 23 10:32:47 crc kubenswrapper[4904]: I0223 10:32:47.400364 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 10:32:47 crc kubenswrapper[4904]: I0223 10:32:47.400467 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" gracePeriod=600
Feb 23 10:32:47 crc kubenswrapper[4904]: E0223 10:32:47.547450 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 10:32:48 crc kubenswrapper[4904]: I0223 10:32:48.484509 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" exitCode=0
Feb 23 10:32:48 crc kubenswrapper[4904]: I0223 10:32:48.484580 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112"}
Feb 23 10:32:48 crc kubenswrapper[4904]: I0223 10:32:48.486023 4904 scope.go:117] "RemoveContainer" containerID="239d9f69c1c753f6e98d8261e34261e4ea3e4b4d4d57f0a5fcbe49086812f15b"
Feb 23 10:32:48 crc kubenswrapper[4904]: I0223 10:32:48.486854 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112"
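
The liveness failure above is a plain HTTP check: the kubelet GETs http://127.0.0.1:8798/health, gets connection refused because the machine-config-daemon is not listening, marks the container unhealthy, and kills it with its 600-second grace period. A standalone sketch of that check; the URL is from the log, the 1s client timeout is an assumption, and the kubelet's real prober lives in k8s.io/kubernetes/pkg/probe/http:

    // liveness_sketch.go: the shape of the HTTP liveness check the kubelet
    // is running against the machine-config-daemon: GET the endpoint, fail
    // on a connection error or a non-2xx status.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second} // timeout assumed
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused",
            // exactly what the Liveness probe reports above.
            fmt.Printf("liveness failure: %v\n", err)
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 300 {
            fmt.Printf("liveness failure: HTTP %d\n", resp.StatusCode)
            return
        }
        fmt.Println("liveness success")
    }
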
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:32:49 crc kubenswrapper[4904]: I0223 10:32:49.299379 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:49 crc kubenswrapper[4904]: I0223 10:32:49.348407 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:50 crc kubenswrapper[4904]: I0223 10:32:50.102779 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t45h7"] Feb 23 10:32:50 crc kubenswrapper[4904]: I0223 10:32:50.510362 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t45h7" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="registry-server" containerID="cri-o://d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624" gracePeriod=2 Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.081565 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.158448 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-utilities\") pod \"9265564f-d316-4af9-aee7-05fef3748cc6\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.158614 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-catalog-content\") pod \"9265564f-d316-4af9-aee7-05fef3748cc6\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.158670 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pxhm\" (UniqueName: \"kubernetes.io/projected/9265564f-d316-4af9-aee7-05fef3748cc6-kube-api-access-5pxhm\") pod \"9265564f-d316-4af9-aee7-05fef3748cc6\" (UID: \"9265564f-d316-4af9-aee7-05fef3748cc6\") " Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.159570 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-utilities" (OuterVolumeSpecName: "utilities") pod "9265564f-d316-4af9-aee7-05fef3748cc6" (UID: "9265564f-d316-4af9-aee7-05fef3748cc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.165988 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9265564f-d316-4af9-aee7-05fef3748cc6-kube-api-access-5pxhm" (OuterVolumeSpecName: "kube-api-access-5pxhm") pod "9265564f-d316-4af9-aee7-05fef3748cc6" (UID: "9265564f-d316-4af9-aee7-05fef3748cc6"). InnerVolumeSpecName "kube-api-access-5pxhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.261178 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.261213 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pxhm\" (UniqueName: \"kubernetes.io/projected/9265564f-d316-4af9-aee7-05fef3748cc6-kube-api-access-5pxhm\") on node \"crc\" DevicePath \"\"" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.317965 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9265564f-d316-4af9-aee7-05fef3748cc6" (UID: "9265564f-d316-4af9-aee7-05fef3748cc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.363208 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9265564f-d316-4af9-aee7-05fef3748cc6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.524445 4904 generic.go:334] "Generic (PLEG): container finished" podID="9265564f-d316-4af9-aee7-05fef3748cc6" containerID="d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624" exitCode=0 Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.524516 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t45h7" event={"ID":"9265564f-d316-4af9-aee7-05fef3748cc6","Type":"ContainerDied","Data":"d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624"} Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.524572 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t45h7" event={"ID":"9265564f-d316-4af9-aee7-05fef3748cc6","Type":"ContainerDied","Data":"e4559b24913bcb0dd21f89347aee2965cbcfb17ac0ef23e0c81cdf4f50894215"} Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.524611 4904 scope.go:117] "RemoveContainer" containerID="d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.524527 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t45h7" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.557488 4904 scope.go:117] "RemoveContainer" containerID="48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.580223 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t45h7"] Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.596871 4904 scope.go:117] "RemoveContainer" containerID="a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.597148 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t45h7"] Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.678406 4904 scope.go:117] "RemoveContainer" containerID="d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624" Feb 23 10:32:51 crc kubenswrapper[4904]: E0223 10:32:51.679289 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624\": container with ID starting with d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624 not found: ID does not exist" containerID="d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.679366 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624"} err="failed to get container status \"d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624\": rpc error: code = NotFound desc = could not find container \"d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624\": container with ID starting with d40549e79edbaddf394103a602a01006468985b322b147771907f146fe0ad624 not found: ID does not exist" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.679408 4904 scope.go:117] "RemoveContainer" containerID="48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47" Feb 23 10:32:51 crc kubenswrapper[4904]: E0223 10:32:51.679832 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47\": container with ID starting with 48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47 not found: ID does not exist" containerID="48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.679887 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47"} err="failed to get container status \"48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47\": rpc error: code = NotFound desc = could not find container \"48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47\": container with ID starting with 48912bc20b54c8ac6bb7108b95511427c8e5d266a14cc8b4d18062b86c0eca47 not found: ID does not exist" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.679922 4904 scope.go:117] "RemoveContainer" containerID="a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e" Feb 23 10:32:51 crc kubenswrapper[4904]: E0223 10:32:51.680273 4904 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e\": container with ID starting with a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e not found: ID does not exist" containerID="a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e" Feb 23 10:32:51 crc kubenswrapper[4904]: I0223 10:32:51.680316 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e"} err="failed to get container status \"a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e\": rpc error: code = NotFound desc = could not find container \"a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e\": container with ID starting with a141285d63200feeb1857f5037044273b9c28a09e8d870ed1bfa7c8c14662b1e not found: ID does not exist" Feb 23 10:32:53 crc kubenswrapper[4904]: I0223 10:32:53.277950 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" path="/var/lib/kubelet/pods/9265564f-d316-4af9-aee7-05fef3748cc6/volumes" Feb 23 10:33:01 crc kubenswrapper[4904]: I0223 10:33:01.255369 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:33:01 crc kubenswrapper[4904]: E0223 10:33:01.256292 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.333302 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pvqn6"] Feb 23 10:33:06 crc kubenswrapper[4904]: E0223 10:33:06.334579 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerName="registry-server" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.334603 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerName="registry-server" Feb 23 10:33:06 crc kubenswrapper[4904]: E0223 10:33:06.334653 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerName="extract-content" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.334666 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerName="extract-content" Feb 23 10:33:06 crc kubenswrapper[4904]: E0223 10:33:06.334687 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="registry-server" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.334701 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="registry-server" Feb 23 10:33:06 crc kubenswrapper[4904]: E0223 10:33:06.334754 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="extract-utilities" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.334768 4904 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="extract-utilities" Feb 23 10:33:06 crc kubenswrapper[4904]: E0223 10:33:06.334791 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerName="extract-utilities" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.334803 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerName="extract-utilities" Feb 23 10:33:06 crc kubenswrapper[4904]: E0223 10:33:06.334830 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="extract-content" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.334843 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="extract-content" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.335242 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9265564f-d316-4af9-aee7-05fef3748cc6" containerName="registry-server" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.335286 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b31b3b7-65d8-49b8-95c2-19be5fb87a24" containerName="registry-server" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.338086 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.356690 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvqn6"] Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.533047 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-utilities\") pod \"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.533258 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqt9\" (UniqueName: \"kubernetes.io/projected/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-kube-api-access-ttqt9\") pod \"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.533299 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-catalog-content\") pod \"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.635281 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-utilities\") pod \"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.635454 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqt9\" (UniqueName: \"kubernetes.io/projected/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-kube-api-access-ttqt9\") pod 
\"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.635489 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-catalog-content\") pod \"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.636081 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-utilities\") pod \"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.636103 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-catalog-content\") pod \"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.664607 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqt9\" (UniqueName: \"kubernetes.io/projected/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-kube-api-access-ttqt9\") pod \"redhat-marketplace-pvqn6\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:06 crc kubenswrapper[4904]: I0223 10:33:06.682211 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:07 crc kubenswrapper[4904]: I0223 10:33:07.194017 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvqn6"] Feb 23 10:33:07 crc kubenswrapper[4904]: W0223 10:33:07.199251 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5232bd37_33fa_4c62_bde3_b9617e9f2f5e.slice/crio-c54f534ae1d8501a634d7176333fdfb2a00e6278773412b383b96d04786fef5a WatchSource:0}: Error finding container c54f534ae1d8501a634d7176333fdfb2a00e6278773412b383b96d04786fef5a: Status 404 returned error can't find the container with id c54f534ae1d8501a634d7176333fdfb2a00e6278773412b383b96d04786fef5a Feb 23 10:33:07 crc kubenswrapper[4904]: I0223 10:33:07.757887 4904 generic.go:334] "Generic (PLEG): container finished" podID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerID="1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48" exitCode=0 Feb 23 10:33:07 crc kubenswrapper[4904]: I0223 10:33:07.758003 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvqn6" event={"ID":"5232bd37-33fa-4c62-bde3-b9617e9f2f5e","Type":"ContainerDied","Data":"1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48"} Feb 23 10:33:07 crc kubenswrapper[4904]: I0223 10:33:07.758351 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvqn6" event={"ID":"5232bd37-33fa-4c62-bde3-b9617e9f2f5e","Type":"ContainerStarted","Data":"c54f534ae1d8501a634d7176333fdfb2a00e6278773412b383b96d04786fef5a"} Feb 23 10:33:09 crc kubenswrapper[4904]: I0223 10:33:09.781451 4904 generic.go:334] "Generic (PLEG): container finished" podID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerID="ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782" exitCode=0 Feb 23 10:33:09 crc kubenswrapper[4904]: I0223 10:33:09.781507 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvqn6" event={"ID":"5232bd37-33fa-4c62-bde3-b9617e9f2f5e","Type":"ContainerDied","Data":"ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782"} Feb 23 10:33:10 crc kubenswrapper[4904]: I0223 10:33:10.800046 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvqn6" event={"ID":"5232bd37-33fa-4c62-bde3-b9617e9f2f5e","Type":"ContainerStarted","Data":"78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda"} Feb 23 10:33:10 crc kubenswrapper[4904]: I0223 10:33:10.838374 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pvqn6" podStartSLOduration=2.402377158 podStartE2EDuration="4.838351968s" podCreationTimestamp="2026-02-23 10:33:06 +0000 UTC" firstStartedPulling="2026-02-23 10:33:07.762302598 +0000 UTC m=+1621.182676161" lastFinishedPulling="2026-02-23 10:33:10.198277458 +0000 UTC m=+1623.618650971" observedRunningTime="2026-02-23 10:33:10.827087337 +0000 UTC m=+1624.247460890" watchObservedRunningTime="2026-02-23 10:33:10.838351968 +0000 UTC m=+1624.258725491" Feb 23 10:33:12 crc kubenswrapper[4904]: I0223 10:33:12.255502 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:33:12 crc kubenswrapper[4904]: E0223 10:33:12.256173 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.567519 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6zhf2"] Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.570661 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.585658 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6zhf2"] Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.712955 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-catalog-content\") pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.713110 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-utilities\") pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.713235 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvw45\" (UniqueName: \"kubernetes.io/projected/a37140ad-e670-4e18-ba70-76ccc362de4d-kube-api-access-lvw45\") pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.815142 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvw45\" (UniqueName: \"kubernetes.io/projected/a37140ad-e670-4e18-ba70-76ccc362de4d-kube-api-access-lvw45\") pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.815218 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-catalog-content\") pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.815296 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-utilities\") pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.815876 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-utilities\") 
pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.815885 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-catalog-content\") pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.841468 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvw45\" (UniqueName: \"kubernetes.io/projected/a37140ad-e670-4e18-ba70-76ccc362de4d-kube-api-access-lvw45\") pod \"certified-operators-6zhf2\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:13 crc kubenswrapper[4904]: I0223 10:33:13.898013 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:14 crc kubenswrapper[4904]: W0223 10:33:14.430490 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda37140ad_e670_4e18_ba70_76ccc362de4d.slice/crio-ed93400d4d9f8b2c86308ad7cb5644df2e1c34270d88dbecd772d4a568d17d94 WatchSource:0}: Error finding container ed93400d4d9f8b2c86308ad7cb5644df2e1c34270d88dbecd772d4a568d17d94: Status 404 returned error can't find the container with id ed93400d4d9f8b2c86308ad7cb5644df2e1c34270d88dbecd772d4a568d17d94 Feb 23 10:33:14 crc kubenswrapper[4904]: I0223 10:33:14.434282 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6zhf2"] Feb 23 10:33:14 crc kubenswrapper[4904]: I0223 10:33:14.859886 4904 generic.go:334] "Generic (PLEG): container finished" podID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerID="63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471" exitCode=0 Feb 23 10:33:14 crc kubenswrapper[4904]: I0223 10:33:14.860244 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf2" event={"ID":"a37140ad-e670-4e18-ba70-76ccc362de4d","Type":"ContainerDied","Data":"63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471"} Feb 23 10:33:14 crc kubenswrapper[4904]: I0223 10:33:14.860286 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf2" event={"ID":"a37140ad-e670-4e18-ba70-76ccc362de4d","Type":"ContainerStarted","Data":"ed93400d4d9f8b2c86308ad7cb5644df2e1c34270d88dbecd772d4a568d17d94"} Feb 23 10:33:15 crc kubenswrapper[4904]: I0223 10:33:15.872476 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf2" event={"ID":"a37140ad-e670-4e18-ba70-76ccc362de4d","Type":"ContainerStarted","Data":"281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4"} Feb 23 10:33:16 crc kubenswrapper[4904]: I0223 10:33:16.683068 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:16 crc kubenswrapper[4904]: I0223 10:33:16.683595 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:16 crc kubenswrapper[4904]: I0223 10:33:16.768199 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:16 crc kubenswrapper[4904]: I0223 10:33:16.952638 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:17 crc kubenswrapper[4904]: I0223 10:33:17.916642 4904 generic.go:334] "Generic (PLEG): container finished" podID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerID="281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4" exitCode=0 Feb 23 10:33:17 crc kubenswrapper[4904]: I0223 10:33:17.916915 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf2" event={"ID":"a37140ad-e670-4e18-ba70-76ccc362de4d","Type":"ContainerDied","Data":"281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4"} Feb 23 10:33:18 crc kubenswrapper[4904]: I0223 10:33:18.942041 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf2" event={"ID":"a37140ad-e670-4e18-ba70-76ccc362de4d","Type":"ContainerStarted","Data":"ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75"} Feb 23 10:33:18 crc kubenswrapper[4904]: I0223 10:33:18.979800 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6zhf2" podStartSLOduration=2.51476696 podStartE2EDuration="5.979783202s" podCreationTimestamp="2026-02-23 10:33:13 +0000 UTC" firstStartedPulling="2026-02-23 10:33:14.864883322 +0000 UTC m=+1628.285256845" lastFinishedPulling="2026-02-23 10:33:18.329899534 +0000 UTC m=+1631.750273087" observedRunningTime="2026-02-23 10:33:18.97410055 +0000 UTC m=+1632.394474063" watchObservedRunningTime="2026-02-23 10:33:18.979783202 +0000 UTC m=+1632.400156715" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.139689 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvqn6"] Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.140334 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pvqn6" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerName="registry-server" containerID="cri-o://78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda" gracePeriod=2 Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.596688 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.768601 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-utilities\") pod \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.768787 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttqt9\" (UniqueName: \"kubernetes.io/projected/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-kube-api-access-ttqt9\") pod \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.768967 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-catalog-content\") pod \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\" (UID: \"5232bd37-33fa-4c62-bde3-b9617e9f2f5e\") " Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.770182 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-utilities" (OuterVolumeSpecName: "utilities") pod "5232bd37-33fa-4c62-bde3-b9617e9f2f5e" (UID: "5232bd37-33fa-4c62-bde3-b9617e9f2f5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.775502 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-kube-api-access-ttqt9" (OuterVolumeSpecName: "kube-api-access-ttqt9") pod "5232bd37-33fa-4c62-bde3-b9617e9f2f5e" (UID: "5232bd37-33fa-4c62-bde3-b9617e9f2f5e"). InnerVolumeSpecName "kube-api-access-ttqt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.826844 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5232bd37-33fa-4c62-bde3-b9617e9f2f5e" (UID: "5232bd37-33fa-4c62-bde3-b9617e9f2f5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.870925 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.870962 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.870973 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttqt9\" (UniqueName: \"kubernetes.io/projected/5232bd37-33fa-4c62-bde3-b9617e9f2f5e-kube-api-access-ttqt9\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.953206 4904 generic.go:334] "Generic (PLEG): container finished" podID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerID="78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda" exitCode=0 Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.953255 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvqn6" event={"ID":"5232bd37-33fa-4c62-bde3-b9617e9f2f5e","Type":"ContainerDied","Data":"78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda"} Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.953286 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvqn6" event={"ID":"5232bd37-33fa-4c62-bde3-b9617e9f2f5e","Type":"ContainerDied","Data":"c54f534ae1d8501a634d7176333fdfb2a00e6278773412b383b96d04786fef5a"} Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.953305 4904 scope.go:117] "RemoveContainer" containerID="78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.953308 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvqn6" Feb 23 10:33:19 crc kubenswrapper[4904]: I0223 10:33:19.991253 4904 scope.go:117] "RemoveContainer" containerID="ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782" Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.002838 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvqn6"] Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.012666 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvqn6"] Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.019332 4904 scope.go:117] "RemoveContainer" containerID="1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48" Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.068246 4904 scope.go:117] "RemoveContainer" containerID="78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda" Feb 23 10:33:20 crc kubenswrapper[4904]: E0223 10:33:20.068799 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda\": container with ID starting with 78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda not found: ID does not exist" containerID="78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda" Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.068837 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda"} err="failed to get container status \"78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda\": rpc error: code = NotFound desc = could not find container \"78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda\": container with ID starting with 78fc035c78b950dc689294e7b717f861ff5d054a5aacb7d0740128b05ea13cda not found: ID does not exist" Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.068862 4904 scope.go:117] "RemoveContainer" containerID="ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782" Feb 23 10:33:20 crc kubenswrapper[4904]: E0223 10:33:20.069387 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782\": container with ID starting with ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782 not found: ID does not exist" containerID="ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782" Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.069442 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782"} err="failed to get container status \"ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782\": rpc error: code = NotFound desc = could not find container \"ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782\": container with ID starting with ca9377172c18c50d603675b4ac79b954f3f89572d97101d9ff7c58be3b4a7782 not found: ID does not exist" Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.069478 4904 scope.go:117] "RemoveContainer" containerID="1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48" Feb 23 10:33:20 crc kubenswrapper[4904]: E0223 10:33:20.069910 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48\": container with ID starting with 1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48 not found: ID does not exist" containerID="1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48" Feb 23 10:33:20 crc kubenswrapper[4904]: I0223 10:33:20.069938 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48"} err="failed to get container status \"1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48\": rpc error: code = NotFound desc = could not find container \"1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48\": container with ID starting with 1bc2126d13bb793c63c0b233868d128da56d78ac9ada90d46be75698bbf5bf48 not found: ID does not exist" Feb 23 10:33:21 crc kubenswrapper[4904]: I0223 10:33:21.274952 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" path="/var/lib/kubelet/pods/5232bd37-33fa-4c62-bde3-b9617e9f2f5e/volumes" Feb 23 10:33:21 crc kubenswrapper[4904]: I0223 10:33:21.984523 4904 generic.go:334] "Generic (PLEG): container finished" podID="da651589-0c88-4249-9dff-de1c46412cf5" containerID="e7f0c223ec943c5cc61f0cfcd101ae8a3ed1d1cf04e8f847a9c60fbac7926c93" exitCode=0 Feb 23 10:33:21 crc kubenswrapper[4904]: I0223 10:33:21.984592 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" event={"ID":"da651589-0c88-4249-9dff-de1c46412cf5","Type":"ContainerDied","Data":"e7f0c223ec943c5cc61f0cfcd101ae8a3ed1d1cf04e8f847a9c60fbac7926c93"} Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.507383 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.568348 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-inventory\") pod \"da651589-0c88-4249-9dff-de1c46412cf5\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.568606 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-ssh-key-openstack-edpm-ipam\") pod \"da651589-0c88-4249-9dff-de1c46412cf5\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.568735 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kb82\" (UniqueName: \"kubernetes.io/projected/da651589-0c88-4249-9dff-de1c46412cf5-kube-api-access-8kb82\") pod \"da651589-0c88-4249-9dff-de1c46412cf5\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.568766 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-bootstrap-combined-ca-bundle\") pod \"da651589-0c88-4249-9dff-de1c46412cf5\" (UID: \"da651589-0c88-4249-9dff-de1c46412cf5\") " Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.579533 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "da651589-0c88-4249-9dff-de1c46412cf5" (UID: "da651589-0c88-4249-9dff-de1c46412cf5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.595579 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da651589-0c88-4249-9dff-de1c46412cf5-kube-api-access-8kb82" (OuterVolumeSpecName: "kube-api-access-8kb82") pod "da651589-0c88-4249-9dff-de1c46412cf5" (UID: "da651589-0c88-4249-9dff-de1c46412cf5"). InnerVolumeSpecName "kube-api-access-8kb82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.610430 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-inventory" (OuterVolumeSpecName: "inventory") pod "da651589-0c88-4249-9dff-de1c46412cf5" (UID: "da651589-0c88-4249-9dff-de1c46412cf5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.629347 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da651589-0c88-4249-9dff-de1c46412cf5" (UID: "da651589-0c88-4249-9dff-de1c46412cf5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.673400 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.673481 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.673554 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kb82\" (UniqueName: \"kubernetes.io/projected/da651589-0c88-4249-9dff-de1c46412cf5-kube-api-access-8kb82\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.673575 4904 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da651589-0c88-4249-9dff-de1c46412cf5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.899096 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.899172 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:23 crc kubenswrapper[4904]: I0223 10:33:23.986413 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.014738 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" event={"ID":"da651589-0c88-4249-9dff-de1c46412cf5","Type":"ContainerDied","Data":"795f547503ac19b0cae899a21c3939610aaf57e3c9c489e1004ee4f7eaf88d20"} Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.015164 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="795f547503ac19b0cae899a21c3939610aaf57e3c9c489e1004ee4f7eaf88d20" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.015516 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.084963 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.132427 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb"] Feb 23 10:33:24 crc kubenswrapper[4904]: E0223 10:33:24.133069 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerName="extract-utilities" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.133097 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerName="extract-utilities" Feb 23 10:33:24 crc kubenswrapper[4904]: E0223 10:33:24.133147 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerName="extract-content" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.133156 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerName="extract-content" Feb 23 10:33:24 crc kubenswrapper[4904]: E0223 10:33:24.133180 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da651589-0c88-4249-9dff-de1c46412cf5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.133190 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="da651589-0c88-4249-9dff-de1c46412cf5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 10:33:24 crc kubenswrapper[4904]: E0223 10:33:24.133205 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerName="registry-server" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.133212 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerName="registry-server" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.133436 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5232bd37-33fa-4c62-bde3-b9617e9f2f5e" containerName="registry-server" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.133455 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="da651589-0c88-4249-9dff-de1c46412cf5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.134251 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.136891 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.137470 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.138200 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.139063 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.161482 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb"] Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.286947 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.287053 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.287197 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqsl\" (UniqueName: \"kubernetes.io/projected/3b2f45c5-2ae5-43d7-8845-333ed3242dab-kube-api-access-ncqsl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.390976 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.391134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.392089 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqsl\" (UniqueName: 
\"kubernetes.io/projected/3b2f45c5-2ae5-43d7-8845-333ed3242dab-kube-api-access-ncqsl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.395413 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.395520 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.415403 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqsl\" (UniqueName: \"kubernetes.io/projected/3b2f45c5-2ae5-43d7-8845-333ed3242dab-kube-api-access-ncqsl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hszzb\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:24 crc kubenswrapper[4904]: I0223 10:33:24.455734 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:33:25 crc kubenswrapper[4904]: I0223 10:33:25.069346 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb"] Feb 23 10:33:25 crc kubenswrapper[4904]: W0223 10:33:25.070151 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b2f45c5_2ae5_43d7_8845_333ed3242dab.slice/crio-5c5161bea417e234d4bf239982117027b7c4d1aa561b3e8c38c5b5f00d812da1 WatchSource:0}: Error finding container 5c5161bea417e234d4bf239982117027b7c4d1aa561b3e8c38c5b5f00d812da1: Status 404 returned error can't find the container with id 5c5161bea417e234d4bf239982117027b7c4d1aa561b3e8c38c5b5f00d812da1 Feb 23 10:33:25 crc kubenswrapper[4904]: I0223 10:33:25.124582 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6zhf2"] Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.043872 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6zhf2" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerName="registry-server" containerID="cri-o://ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75" gracePeriod=2 Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.043984 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" event={"ID":"3b2f45c5-2ae5-43d7-8845-333ed3242dab","Type":"ContainerStarted","Data":"bcbc1caa79d9b7489a9309977c7a3bba691611b7d4afc5c0e180ed26997181dc"} Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.044478 4904 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" event={"ID":"3b2f45c5-2ae5-43d7-8845-333ed3242dab","Type":"ContainerStarted","Data":"5c5161bea417e234d4bf239982117027b7c4d1aa561b3e8c38c5b5f00d812da1"} Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.064023 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" podStartSLOduration=1.60614229 podStartE2EDuration="2.063998573s" podCreationTimestamp="2026-02-23 10:33:24 +0000 UTC" firstStartedPulling="2026-02-23 10:33:25.073683875 +0000 UTC m=+1638.494057428" lastFinishedPulling="2026-02-23 10:33:25.531540198 +0000 UTC m=+1638.951913711" observedRunningTime="2026-02-23 10:33:26.062282554 +0000 UTC m=+1639.482656087" watchObservedRunningTime="2026-02-23 10:33:26.063998573 +0000 UTC m=+1639.484372086" Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.565675 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.675910 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-catalog-content\") pod \"a37140ad-e670-4e18-ba70-76ccc362de4d\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.676421 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvw45\" (UniqueName: \"kubernetes.io/projected/a37140ad-e670-4e18-ba70-76ccc362de4d-kube-api-access-lvw45\") pod \"a37140ad-e670-4e18-ba70-76ccc362de4d\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.677557 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-utilities\") pod \"a37140ad-e670-4e18-ba70-76ccc362de4d\" (UID: \"a37140ad-e670-4e18-ba70-76ccc362de4d\") " Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.679602 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-utilities" (OuterVolumeSpecName: "utilities") pod "a37140ad-e670-4e18-ba70-76ccc362de4d" (UID: "a37140ad-e670-4e18-ba70-76ccc362de4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.685924 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37140ad-e670-4e18-ba70-76ccc362de4d-kube-api-access-lvw45" (OuterVolumeSpecName: "kube-api-access-lvw45") pod "a37140ad-e670-4e18-ba70-76ccc362de4d" (UID: "a37140ad-e670-4e18-ba70-76ccc362de4d"). InnerVolumeSpecName "kube-api-access-lvw45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.733811 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a37140ad-e670-4e18-ba70-76ccc362de4d" (UID: "a37140ad-e670-4e18-ba70-76ccc362de4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.780261 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.780305 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37140ad-e670-4e18-ba70-76ccc362de4d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:26 crc kubenswrapper[4904]: I0223 10:33:26.780325 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvw45\" (UniqueName: \"kubernetes.io/projected/a37140ad-e670-4e18-ba70-76ccc362de4d-kube-api-access-lvw45\") on node \"crc\" DevicePath \"\"" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.056835 4904 generic.go:334] "Generic (PLEG): container finished" podID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerID="ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75" exitCode=0 Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.056970 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zhf2" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.057246 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf2" event={"ID":"a37140ad-e670-4e18-ba70-76ccc362de4d","Type":"ContainerDied","Data":"ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75"} Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.057366 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf2" event={"ID":"a37140ad-e670-4e18-ba70-76ccc362de4d","Type":"ContainerDied","Data":"ed93400d4d9f8b2c86308ad7cb5644df2e1c34270d88dbecd772d4a568d17d94"} Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.057511 4904 scope.go:117] "RemoveContainer" containerID="ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.081019 4904 scope.go:117] "RemoveContainer" containerID="281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.117776 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6zhf2"] Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.121444 4904 scope.go:117] "RemoveContainer" containerID="63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.127875 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6zhf2"] Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.168975 4904 scope.go:117] "RemoveContainer" containerID="ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75" Feb 23 10:33:27 crc kubenswrapper[4904]: E0223 10:33:27.169386 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75\": container with ID starting with ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75 not found: ID does not exist" containerID="ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.169453 
4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75"} err="failed to get container status \"ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75\": rpc error: code = NotFound desc = could not find container \"ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75\": container with ID starting with ee29d0483e5d99cf0cf64cf4bb16f8367a19caa040f00695990193e872f5bc75 not found: ID does not exist" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.169484 4904 scope.go:117] "RemoveContainer" containerID="281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4" Feb 23 10:33:27 crc kubenswrapper[4904]: E0223 10:33:27.169886 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4\": container with ID starting with 281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4 not found: ID does not exist" containerID="281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.169915 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4"} err="failed to get container status \"281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4\": rpc error: code = NotFound desc = could not find container \"281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4\": container with ID starting with 281856e685fedbdc7135cdf42c784f44818f5ff7e28521dcf28826b567797ab4 not found: ID does not exist" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.169933 4904 scope.go:117] "RemoveContainer" containerID="63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471" Feb 23 10:33:27 crc kubenswrapper[4904]: E0223 10:33:27.170173 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471\": container with ID starting with 63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471 not found: ID does not exist" containerID="63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.170203 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471"} err="failed to get container status \"63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471\": rpc error: code = NotFound desc = could not find container \"63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471\": container with ID starting with 63035889cb78c0c814463c12e0c9d3a2331f929be2754beb794fbf07d2dc3471 not found: ID does not exist" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.268296 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:33:27 crc kubenswrapper[4904]: E0223 10:33:27.268583 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:33:27 crc kubenswrapper[4904]: I0223 10:33:27.269660 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" path="/var/lib/kubelet/pods/a37140ad-e670-4e18-ba70-76ccc362de4d/volumes" Feb 23 10:33:42 crc kubenswrapper[4904]: I0223 10:33:42.255396 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:33:42 crc kubenswrapper[4904]: E0223 10:33:42.256846 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.202588 4904 scope.go:117] "RemoveContainer" containerID="f93a7defda0bca9c2d478c580efe945c2f5369312ad818f56eedab10ea8a1603" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.231776 4904 scope.go:117] "RemoveContainer" containerID="964e6f5458863f11647447362d607d4fb4a15dda069cbd90a2e7693f5c49b51f" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.274988 4904 scope.go:117] "RemoveContainer" containerID="ba58694585c0325fdba58ea818bf3f5001c9641cbf33143a6865308944cb1e23" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.323207 4904 scope.go:117] "RemoveContainer" containerID="436e3e49b99405d98962a234bce04ed5f26cc825f40c6af0f782cf917739aa77" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.347685 4904 scope.go:117] "RemoveContainer" containerID="ae87a4aeccb3cdb31bd1a9c78763e7363ccdb2bcd6e4e18d13cb13aecaf55ec2" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.364566 4904 scope.go:117] "RemoveContainer" containerID="4ba716a5c2914840e8c52237c75abe9b3cc615bc283bc54dae325338b8f2cfeb" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.387726 4904 scope.go:117] "RemoveContainer" containerID="dd2f5440f82603511f11622af292f0ed0d1a0833269abd220b84be0b3771841e" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.407011 4904 scope.go:117] "RemoveContainer" containerID="d87c51e9b54e2073525ab467a81175d147b5890affcb3be45042e11bdcdf9b9a" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.423947 4904 scope.go:117] "RemoveContainer" containerID="f881512fa121667304f4aa20732910cee9f687bf22ce0f8457a7968f0192f2e2" Feb 23 10:33:43 crc kubenswrapper[4904]: I0223 10:33:43.443850 4904 scope.go:117] "RemoveContainer" containerID="5876f58320ba2fda9d5a3bde5a784d7eea607299efab8816380aac2c39de4f74" Feb 23 10:33:54 crc kubenswrapper[4904]: I0223 10:33:54.065362 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9169-account-create-update-j44z4"] Feb 23 10:33:54 crc kubenswrapper[4904]: I0223 10:33:54.079306 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-a4fe-account-create-update-xbczb"] Feb 23 10:33:54 crc kubenswrapper[4904]: I0223 10:33:54.097131 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-a4fe-account-create-update-xbczb"] Feb 23 10:33:54 crc kubenswrapper[4904]: I0223 10:33:54.116848 4904 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/placement-9169-account-create-update-j44z4"] Feb 23 10:33:55 crc kubenswrapper[4904]: I0223 10:33:55.034200 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-c9pnc"] Feb 23 10:33:55 crc kubenswrapper[4904]: I0223 10:33:55.044329 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-c9pnc"] Feb 23 10:33:55 crc kubenswrapper[4904]: I0223 10:33:55.279413 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e2ded5-4041-484c-b117-6df53876c328" path="/var/lib/kubelet/pods/83e2ded5-4041-484c-b117-6df53876c328/volumes" Feb 23 10:33:55 crc kubenswrapper[4904]: I0223 10:33:55.282249 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94708a44-bcf8-4084-8ba8-8c1ffdcf70e4" path="/var/lib/kubelet/pods/94708a44-bcf8-4084-8ba8-8c1ffdcf70e4/volumes" Feb 23 10:33:55 crc kubenswrapper[4904]: I0223 10:33:55.283924 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ef6d86-3b22-41cd-9a28-f4e7844ec25f" path="/var/lib/kubelet/pods/b3ef6d86-3b22-41cd-9a28-f4e7844ec25f/volumes" Feb 23 10:33:56 crc kubenswrapper[4904]: I0223 10:33:56.256377 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:33:56 crc kubenswrapper[4904]: E0223 10:33:56.257166 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:34:00 crc kubenswrapper[4904]: I0223 10:34:00.048534 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vj425"] Feb 23 10:34:00 crc kubenswrapper[4904]: I0223 10:34:00.062309 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vj425"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.066962 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-727gm"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.075456 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wtd9h"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.083454 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e9fa-account-create-update-n2cwd"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.090855 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-727gm"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.098034 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e9fa-account-create-update-n2cwd"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.104950 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9895-account-create-update-8k6j7"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.111997 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wtd9h"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.118855 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9895-account-create-update-8k6j7"] Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 
10:34:01.281554 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f39f07b-ae9c-44dd-a512-2642f0a78b07" path="/var/lib/kubelet/pods/2f39f07b-ae9c-44dd-a512-2642f0a78b07/volumes" Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.282181 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40413731-ae44-4eca-a751-3ba3a4e42558" path="/var/lib/kubelet/pods/40413731-ae44-4eca-a751-3ba3a4e42558/volumes" Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.283116 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9e4686-034e-4478-a278-f2ceb0516da7" path="/var/lib/kubelet/pods/6e9e4686-034e-4478-a278-f2ceb0516da7/volumes" Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.362455 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83019808-1db2-46ed-87bd-ef75476802e7" path="/var/lib/kubelet/pods/83019808-1db2-46ed-87bd-ef75476802e7/volumes" Feb 23 10:34:01 crc kubenswrapper[4904]: I0223 10:34:01.363757 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53aa525-6ef1-4984-803a-fc0466c201b3" path="/var/lib/kubelet/pods/c53aa525-6ef1-4984-803a-fc0466c201b3/volumes" Feb 23 10:34:09 crc kubenswrapper[4904]: I0223 10:34:09.255136 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:34:09 crc kubenswrapper[4904]: E0223 10:34:09.255939 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:34:12 crc kubenswrapper[4904]: I0223 10:34:12.044260 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lqtz2"] Feb 23 10:34:12 crc kubenswrapper[4904]: I0223 10:34:12.056948 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lqtz2"] Feb 23 10:34:13 crc kubenswrapper[4904]: I0223 10:34:13.273201 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d21f024-3e6c-41e0-a11b-0032bd1ec7df" path="/var/lib/kubelet/pods/4d21f024-3e6c-41e0-a11b-0032bd1ec7df/volumes" Feb 23 10:34:23 crc kubenswrapper[4904]: I0223 10:34:23.256349 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:34:23 crc kubenswrapper[4904]: E0223 10:34:23.257562 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:34:29 crc kubenswrapper[4904]: I0223 10:34:29.049366 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db0b-account-create-update-xj8cx"] Feb 23 10:34:29 crc kubenswrapper[4904]: I0223 10:34:29.066443 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db0b-account-create-update-xj8cx"] Feb 23 10:34:29 crc kubenswrapper[4904]: I0223 10:34:29.270114 4904 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85242600-b4c3-4941-93fa-1cf4cb16e6cc" path="/var/lib/kubelet/pods/85242600-b4c3-4941-93fa-1cf4cb16e6cc/volumes" Feb 23 10:34:32 crc kubenswrapper[4904]: I0223 10:34:32.055463 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7058-account-create-update-9h8tl"] Feb 23 10:34:32 crc kubenswrapper[4904]: I0223 10:34:32.068402 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rbxsv"] Feb 23 10:34:32 crc kubenswrapper[4904]: I0223 10:34:32.076971 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7058-account-create-update-9h8tl"] Feb 23 10:34:32 crc kubenswrapper[4904]: I0223 10:34:32.084512 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rbxsv"] Feb 23 10:34:33 crc kubenswrapper[4904]: I0223 10:34:33.277253 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aefea65-709c-4842-88b3-75ecd926e2de" path="/var/lib/kubelet/pods/4aefea65-709c-4842-88b3-75ecd926e2de/volumes" Feb 23 10:34:33 crc kubenswrapper[4904]: I0223 10:34:33.279128 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6" path="/var/lib/kubelet/pods/62f3f6e3-5b78-4a9f-8985-0fe4d9684ac6/volumes" Feb 23 10:34:35 crc kubenswrapper[4904]: I0223 10:34:35.255543 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:34:35 crc kubenswrapper[4904]: E0223 10:34:35.256243 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:34:36 crc kubenswrapper[4904]: I0223 10:34:36.043185 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-wczj2"] Feb 23 10:34:36 crc kubenswrapper[4904]: I0223 10:34:36.053399 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mkr6g"] Feb 23 10:34:36 crc kubenswrapper[4904]: I0223 10:34:36.067459 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mkr6g"] Feb 23 10:34:36 crc kubenswrapper[4904]: I0223 10:34:36.082430 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-wczj2"] Feb 23 10:34:37 crc kubenswrapper[4904]: I0223 10:34:37.288236 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c89630-853b-4415-88b0-2282c4a9e9f9" path="/var/lib/kubelet/pods/48c89630-853b-4415-88b0-2282c4a9e9f9/volumes" Feb 23 10:34:37 crc kubenswrapper[4904]: I0223 10:34:37.289368 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e" path="/var/lib/kubelet/pods/d9ba78b3-bc99-48c2-8b26-4b8f19c02e4e/volumes" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.046190 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-65pr4"] Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.060748 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-65pr4"] Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.073550 4904 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3b16-account-create-update-z7z8t"] Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.085293 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3b16-account-create-update-z7z8t"] Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.307406 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40418ddd-1e86-4037-b7fc-8f2ff25f6b3c" path="/var/lib/kubelet/pods/40418ddd-1e86-4037-b7fc-8f2ff25f6b3c/volumes" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.309570 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d23998-f8e9-45bc-9af7-f37221dc0390" path="/var/lib/kubelet/pods/51d23998-f8e9-45bc-9af7-f37221dc0390/volumes" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.600454 4904 scope.go:117] "RemoveContainer" containerID="a18a085f816229cf166da1f20a67ea25169961a530168ed8c2ec858485860440" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.659686 4904 scope.go:117] "RemoveContainer" containerID="dbaaa8f2caffa0897653f38fd70a874922d96d82228a8d631cb60e2abb18acac" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.715764 4904 scope.go:117] "RemoveContainer" containerID="d21eabfba6e7afc6502e5192ff5344a464376ffe90ad8681c214b7d3b00054bd" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.756016 4904 scope.go:117] "RemoveContainer" containerID="81dc5f182913791237c02172c9322ccbc5479d5cfa8e4715a433a1b68eb5c1b2" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.808671 4904 scope.go:117] "RemoveContainer" containerID="ea9bf472e6bda64a89f4c1bc320f60e50b66e475ed3b0c21c5851a1b869d8c0c" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.852442 4904 scope.go:117] "RemoveContainer" containerID="30091519ac939d0a7863780523865aa01b08fc4f4b609956cc7aeef3d16e8b53" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.895677 4904 scope.go:117] "RemoveContainer" containerID="a9a6a4a9976c5fefe8ed2595cf5486ade6e7661d1e4b27aa80b6557dc5f0c767" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.933447 4904 scope.go:117] "RemoveContainer" containerID="781e12200c006e5fc6fb53626ef5ac2414111464ba67b4efa6b1ccd7c803d831" Feb 23 10:34:43 crc kubenswrapper[4904]: I0223 10:34:43.970916 4904 scope.go:117] "RemoveContainer" containerID="99221c4a0d63aefd0b780abbc1ade1fca7299f3b0c83fb00bef8e9e5c58028ec" Feb 23 10:34:44 crc kubenswrapper[4904]: I0223 10:34:44.002829 4904 scope.go:117] "RemoveContainer" containerID="287b17de3d479a4678a9cce8e78fc2c7bda1eab41e76e77259bac2a25abbd69f" Feb 23 10:34:44 crc kubenswrapper[4904]: I0223 10:34:44.031303 4904 scope.go:117] "RemoveContainer" containerID="7ebb474f0b91f4489d7845f31ad72411e599a1b648d9968324346be5acdd48e5" Feb 23 10:34:44 crc kubenswrapper[4904]: I0223 10:34:44.070638 4904 scope.go:117] "RemoveContainer" containerID="abace7ea57a9d1b994083f2eb64ab5689be1c30b0a24fdd96c1e2e601746dcea" Feb 23 10:34:44 crc kubenswrapper[4904]: I0223 10:34:44.094884 4904 scope.go:117] "RemoveContainer" containerID="1404d599ef0469f9373629e84dff01fdb884d2a9f9cf9f77b3a37eb69bcdb9d8" Feb 23 10:34:44 crc kubenswrapper[4904]: I0223 10:34:44.118745 4904 scope.go:117] "RemoveContainer" containerID="9b2ed722df78c25b0cd8691be0e3eb66b413f0f4c3c563f9053691c397dd82a2" Feb 23 10:34:44 crc kubenswrapper[4904]: I0223 10:34:44.138307 4904 scope.go:117] "RemoveContainer" containerID="9f9b33d6ed1075c890d231b22368e9b05b596843faa41986ce8b05deaf62edf9" Feb 23 10:34:44 crc kubenswrapper[4904]: I0223 10:34:44.159211 4904 scope.go:117] 
"RemoveContainer" containerID="eb2a9783514dac00ea7ca2c8b57763124112cf22b8307716d3f78d5fd03aff4a" Feb 23 10:34:46 crc kubenswrapper[4904]: I0223 10:34:46.255677 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:34:46 crc kubenswrapper[4904]: E0223 10:34:46.256466 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:34:47 crc kubenswrapper[4904]: I0223 10:34:47.038960 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-rf6dz"] Feb 23 10:34:47 crc kubenswrapper[4904]: I0223 10:34:47.060649 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-rf6dz"] Feb 23 10:34:47 crc kubenswrapper[4904]: I0223 10:34:47.278163 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea2afdb-ef40-490c-9b9c-5acd41bcbee7" path="/var/lib/kubelet/pods/1ea2afdb-ef40-490c-9b9c-5acd41bcbee7/volumes" Feb 23 10:34:48 crc kubenswrapper[4904]: I0223 10:34:48.127012 4904 generic.go:334] "Generic (PLEG): container finished" podID="3b2f45c5-2ae5-43d7-8845-333ed3242dab" containerID="bcbc1caa79d9b7489a9309977c7a3bba691611b7d4afc5c0e180ed26997181dc" exitCode=0 Feb 23 10:34:48 crc kubenswrapper[4904]: I0223 10:34:48.127100 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" event={"ID":"3b2f45c5-2ae5-43d7-8845-333ed3242dab","Type":"ContainerDied","Data":"bcbc1caa79d9b7489a9309977c7a3bba691611b7d4afc5c0e180ed26997181dc"} Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.623236 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.750137 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-ssh-key-openstack-edpm-ipam\") pod \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.750311 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncqsl\" (UniqueName: \"kubernetes.io/projected/3b2f45c5-2ae5-43d7-8845-333ed3242dab-kube-api-access-ncqsl\") pod \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.750349 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-inventory\") pod \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\" (UID: \"3b2f45c5-2ae5-43d7-8845-333ed3242dab\") " Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.756570 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2f45c5-2ae5-43d7-8845-333ed3242dab-kube-api-access-ncqsl" (OuterVolumeSpecName: "kube-api-access-ncqsl") pod "3b2f45c5-2ae5-43d7-8845-333ed3242dab" (UID: "3b2f45c5-2ae5-43d7-8845-333ed3242dab"). InnerVolumeSpecName "kube-api-access-ncqsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.793550 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b2f45c5-2ae5-43d7-8845-333ed3242dab" (UID: "3b2f45c5-2ae5-43d7-8845-333ed3242dab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.798127 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-inventory" (OuterVolumeSpecName: "inventory") pod "3b2f45c5-2ae5-43d7-8845-333ed3242dab" (UID: "3b2f45c5-2ae5-43d7-8845-333ed3242dab"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.853937 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncqsl\" (UniqueName: \"kubernetes.io/projected/3b2f45c5-2ae5-43d7-8845-333ed3242dab-kube-api-access-ncqsl\") on node \"crc\" DevicePath \"\"" Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.853985 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:34:49 crc kubenswrapper[4904]: I0223 10:34:49.853999 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2f45c5-2ae5-43d7-8845-333ed3242dab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.155424 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" event={"ID":"3b2f45c5-2ae5-43d7-8845-333ed3242dab","Type":"ContainerDied","Data":"5c5161bea417e234d4bf239982117027b7c4d1aa561b3e8c38c5b5f00d812da1"} Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.155483 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c5161bea417e234d4bf239982117027b7c4d1aa561b3e8c38c5b5f00d812da1" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.155529 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hszzb" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.297662 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f"] Feb 23 10:34:50 crc kubenswrapper[4904]: E0223 10:34:50.298471 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerName="registry-server" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.298504 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerName="registry-server" Feb 23 10:34:50 crc kubenswrapper[4904]: E0223 10:34:50.298527 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerName="extract-content" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.298541 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerName="extract-content" Feb 23 10:34:50 crc kubenswrapper[4904]: E0223 10:34:50.298600 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2f45c5-2ae5-43d7-8845-333ed3242dab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.298617 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2f45c5-2ae5-43d7-8845-333ed3242dab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 10:34:50 crc kubenswrapper[4904]: E0223 10:34:50.298651 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerName="extract-utilities" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.298665 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerName="extract-utilities" Feb 23 10:34:50 crc kubenswrapper[4904]: 
I0223 10:34:50.299067 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2f45c5-2ae5-43d7-8845-333ed3242dab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.299121 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37140ad-e670-4e18-ba70-76ccc362de4d" containerName="registry-server" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.300388 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.303780 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.303865 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.303918 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.307282 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.316744 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f"] Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.468840 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tlh5\" (UniqueName: \"kubernetes.io/projected/3abf50a2-3000-41eb-97fc-814e2b55cd58-kube-api-access-8tlh5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.468965 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.469020 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.571894 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.571991 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.572340 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tlh5\" (UniqueName: \"kubernetes.io/projected/3abf50a2-3000-41eb-97fc-814e2b55cd58-kube-api-access-8tlh5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.580959 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.581003 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.599068 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tlh5\" (UniqueName: \"kubernetes.io/projected/3abf50a2-3000-41eb-97fc-814e2b55cd58-kube-api-access-8tlh5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:50 crc kubenswrapper[4904]: I0223 10:34:50.641559 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" Feb 23 10:34:51 crc kubenswrapper[4904]: I0223 10:34:51.305081 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f"] Feb 23 10:34:52 crc kubenswrapper[4904]: I0223 10:34:52.186863 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" event={"ID":"3abf50a2-3000-41eb-97fc-814e2b55cd58","Type":"ContainerStarted","Data":"4cc8d47b00acee9156692f4204cc50e6e203b52e3767ff97f3994b5797c2607e"} Feb 23 10:34:52 crc kubenswrapper[4904]: I0223 10:34:52.187550 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" event={"ID":"3abf50a2-3000-41eb-97fc-814e2b55cd58","Type":"ContainerStarted","Data":"334c13eba527fd12b6a12a15c16e64736415b475fd26f6352bd823b3b6740f47"} Feb 23 10:34:52 crc kubenswrapper[4904]: I0223 10:34:52.224178 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" podStartSLOduration=1.740896263 podStartE2EDuration="2.224146339s" podCreationTimestamp="2026-02-23 10:34:50 +0000 UTC" firstStartedPulling="2026-02-23 10:34:51.298275035 +0000 UTC m=+1724.718648558" lastFinishedPulling="2026-02-23 10:34:51.781525081 +0000 UTC m=+1725.201898634" observedRunningTime="2026-02-23 10:34:52.206786855 +0000 UTC m=+1725.627160408" watchObservedRunningTime="2026-02-23 10:34:52.224146339 +0000 UTC m=+1725.644519932" Feb 23 10:35:00 crc kubenswrapper[4904]: I0223 10:35:00.076664 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kt929"] Feb 23 10:35:00 crc kubenswrapper[4904]: I0223 10:35:00.097369 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kt929"] Feb 23 10:35:01 crc kubenswrapper[4904]: I0223 10:35:01.257426 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:35:01 crc kubenswrapper[4904]: E0223 10:35:01.258579 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:35:01 crc kubenswrapper[4904]: I0223 10:35:01.276565 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d1a74dc-f43f-4b29-ab12-8d5169a4b69d" path="/var/lib/kubelet/pods/9d1a74dc-f43f-4b29-ab12-8d5169a4b69d/volumes" Feb 23 10:35:14 crc kubenswrapper[4904]: I0223 10:35:14.256303 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:35:14 crc kubenswrapper[4904]: E0223 10:35:14.257650 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" 
Feb 23 10:35:26 crc kubenswrapper[4904]: I0223 10:35:26.256020 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112"
Feb 23 10:35:26 crc kubenswrapper[4904]: E0223 10:35:26.257350 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 10:35:32 crc kubenswrapper[4904]: I0223 10:35:32.065953 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-wg2jr"]
Feb 23 10:35:32 crc kubenswrapper[4904]: I0223 10:35:32.079286 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-wg2jr"]
Feb 23 10:35:33 crc kubenswrapper[4904]: I0223 10:35:33.288735 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216f1de0-6e22-49b7-88aa-256d0d67b014" path="/var/lib/kubelet/pods/216f1de0-6e22-49b7-88aa-256d0d67b014/volumes"
Feb 23 10:35:37 crc kubenswrapper[4904]: I0223 10:35:37.268816 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112"
Feb 23 10:35:37 crc kubenswrapper[4904]: E0223 10:35:37.270034 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 10:35:40 crc kubenswrapper[4904]: I0223 10:35:40.048821 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mcbg8"]
Feb 23 10:35:40 crc kubenswrapper[4904]: I0223 10:35:40.057622 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mcbg8"]
Feb 23 10:35:41 crc kubenswrapper[4904]: I0223 10:35:41.292772 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f42ae55-89bb-42a1-900c-9c332e089d96" path="/var/lib/kubelet/pods/0f42ae55-89bb-42a1-900c-9c332e089d96/volumes"
Feb 23 10:35:44 crc kubenswrapper[4904]: I0223 10:35:44.464629 4904 scope.go:117] "RemoveContainer" containerID="02d57f56a4cfc9e9ac548c8ee7525fda2da032ca2f85c4f23949136388d0ad53"
Feb 23 10:35:44 crc kubenswrapper[4904]: I0223 10:35:44.488747 4904 scope.go:117] "RemoveContainer" containerID="4898a9a31fdf05f877c11d0498e089985fbf61fc7c151d9f0e04ec1dcadefac7"
Feb 23 10:35:44 crc kubenswrapper[4904]: I0223 10:35:44.547004 4904 scope.go:117] "RemoveContainer" containerID="2dadf099050c5fde95b4ca2898c58548dd27a57c15431d100f50bd18a59c3857"
Feb 23 10:35:44 crc kubenswrapper[4904]: I0223 10:35:44.600932 4904 scope.go:117] "RemoveContainer" containerID="9f495fb4de8b047af9ab23d411d271d8fe2175266c7fb7b185559aa0d219b577"
Feb 23 10:35:44 crc kubenswrapper[4904]: I0223 10:35:44.641291 4904 scope.go:117] "RemoveContainer" containerID="1ec7d259bf388e2b5a7b0744ee3d3d6c6612588e02e081d22e35bafb9c5fa1ce"
Feb 23 10:35:44 crc kubenswrapper[4904]: I0223 10:35:44.668381 4904 scope.go:117] "RemoveContainer" containerID="84aa3f6606ea41a68aa8473b22009f99bbbdee7d57c5d09f818071bb49dfc892"
Feb 23 10:35:48 crc kubenswrapper[4904]: I0223 10:35:48.256472 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112"
Feb 23 10:35:48 crc kubenswrapper[4904]: E0223 10:35:48.257645 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 10:35:50 crc kubenswrapper[4904]: I0223 10:35:50.038988 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-72x5h"]
Feb 23 10:35:50 crc kubenswrapper[4904]: I0223 10:35:50.049074 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-72x5h"]
Feb 23 10:35:51 crc kubenswrapper[4904]: I0223 10:35:51.276225 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a" path="/var/lib/kubelet/pods/6d6eec3d-2b7f-495b-bfb5-4065f0a7b93a/volumes"
Feb 23 10:35:58 crc kubenswrapper[4904]: I0223 10:35:58.030445 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-764vx"]
Feb 23 10:35:58 crc kubenswrapper[4904]: I0223 10:35:58.038185 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-764vx"]
Feb 23 10:35:59 crc kubenswrapper[4904]: I0223 10:35:59.212859 4904 generic.go:334] "Generic (PLEG): container finished" podID="3abf50a2-3000-41eb-97fc-814e2b55cd58" containerID="4cc8d47b00acee9156692f4204cc50e6e203b52e3767ff97f3994b5797c2607e" exitCode=0
Feb 23 10:35:59 crc kubenswrapper[4904]: I0223 10:35:59.212969 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" event={"ID":"3abf50a2-3000-41eb-97fc-814e2b55cd58","Type":"ContainerDied","Data":"4cc8d47b00acee9156692f4204cc50e6e203b52e3767ff97f3994b5797c2607e"}
Feb 23 10:35:59 crc kubenswrapper[4904]: I0223 10:35:59.273340 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89d0344-56a3-4c17-b647-5d69fc060406" path="/var/lib/kubelet/pods/e89d0344-56a3-4c17-b647-5d69fc060406/volumes"
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.764695 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f"
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.873232 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-inventory\") pod \"3abf50a2-3000-41eb-97fc-814e2b55cd58\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") "
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.873307 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-ssh-key-openstack-edpm-ipam\") pod \"3abf50a2-3000-41eb-97fc-814e2b55cd58\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") "
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.873533 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tlh5\" (UniqueName: \"kubernetes.io/projected/3abf50a2-3000-41eb-97fc-814e2b55cd58-kube-api-access-8tlh5\") pod \"3abf50a2-3000-41eb-97fc-814e2b55cd58\" (UID: \"3abf50a2-3000-41eb-97fc-814e2b55cd58\") "
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.903162 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3abf50a2-3000-41eb-97fc-814e2b55cd58-kube-api-access-8tlh5" (OuterVolumeSpecName: "kube-api-access-8tlh5") pod "3abf50a2-3000-41eb-97fc-814e2b55cd58" (UID: "3abf50a2-3000-41eb-97fc-814e2b55cd58"). InnerVolumeSpecName "kube-api-access-8tlh5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.912033 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3abf50a2-3000-41eb-97fc-814e2b55cd58" (UID: "3abf50a2-3000-41eb-97fc-814e2b55cd58"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.968595 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-inventory" (OuterVolumeSpecName: "inventory") pod "3abf50a2-3000-41eb-97fc-814e2b55cd58" (UID: "3abf50a2-3000-41eb-97fc-814e2b55cd58"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.976420 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.976777 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3abf50a2-3000-41eb-97fc-814e2b55cd58-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 10:36:00 crc kubenswrapper[4904]: I0223 10:36:00.976802 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tlh5\" (UniqueName: \"kubernetes.io/projected/3abf50a2-3000-41eb-97fc-814e2b55cd58-kube-api-access-8tlh5\") on node \"crc\" DevicePath \"\""
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.236201 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f" event={"ID":"3abf50a2-3000-41eb-97fc-814e2b55cd58","Type":"ContainerDied","Data":"334c13eba527fd12b6a12a15c16e64736415b475fd26f6352bd823b3b6740f47"}
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.236573 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="334c13eba527fd12b6a12a15c16e64736415b475fd26f6352bd823b3b6740f47"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.236254 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.344696 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"]
Feb 23 10:36:01 crc kubenswrapper[4904]: E0223 10:36:01.345137 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3abf50a2-3000-41eb-97fc-814e2b55cd58" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.345156 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3abf50a2-3000-41eb-97fc-814e2b55cd58" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.345330 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3abf50a2-3000-41eb-97fc-814e2b55cd58" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.346067 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.364555 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.364551 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.364594 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.364839 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.368848 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"]
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.489062 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.489141 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.489319 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrs8l\" (UniqueName: \"kubernetes.io/projected/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-kube-api-access-hrs8l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.591870 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrs8l\" (UniqueName: \"kubernetes.io/projected/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-kube-api-access-hrs8l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.592210 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"
Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.592262 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName:
\"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.597704 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.597753 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.607272 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrs8l\" (UniqueName: \"kubernetes.io/projected/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-kube-api-access-hrs8l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" Feb 23 10:36:01 crc kubenswrapper[4904]: I0223 10:36:01.668211 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" Feb 23 10:36:02 crc kubenswrapper[4904]: I0223 10:36:02.281605 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg"] Feb 23 10:36:03 crc kubenswrapper[4904]: I0223 10:36:03.257754 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:36:03 crc kubenswrapper[4904]: E0223 10:36:03.258029 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:36:03 crc kubenswrapper[4904]: I0223 10:36:03.270951 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" event={"ID":"2c06f842-3a7c-47c7-a8c7-6738d29cdef7","Type":"ContainerStarted","Data":"67dbc0060cc56264f6d9e62cc010d2e9fce9ffc46e76d59b67d7ba0763af9754"} Feb 23 10:36:03 crc kubenswrapper[4904]: I0223 10:36:03.271015 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" event={"ID":"2c06f842-3a7c-47c7-a8c7-6738d29cdef7","Type":"ContainerStarted","Data":"4b453d72ea0398a2be4de12fa9ac4dfab5f32c2c682e46bb88ee4326d45a698d"} Feb 23 10:36:03 crc kubenswrapper[4904]: I0223 10:36:03.286610 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" podStartSLOduration=1.7593989840000002 podStartE2EDuration="2.286588791s" podCreationTimestamp="2026-02-23 10:36:01 +0000 UTC" firstStartedPulling="2026-02-23 10:36:02.292781992 +0000 UTC m=+1795.713155505" lastFinishedPulling="2026-02-23 10:36:02.819971769 +0000 UTC m=+1796.240345312" observedRunningTime="2026-02-23 10:36:03.278137011 +0000 UTC m=+1796.698510534" watchObservedRunningTime="2026-02-23 10:36:03.286588791 +0000 UTC m=+1796.706962314" Feb 23 10:36:06 crc kubenswrapper[4904]: I0223 10:36:06.064094 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8zkpf"] Feb 23 10:36:06 crc kubenswrapper[4904]: I0223 10:36:06.074587 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8zkpf"] Feb 23 10:36:07 crc kubenswrapper[4904]: I0223 10:36:07.277231 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25955027-6da1-4cce-8074-f079cf65f840" path="/var/lib/kubelet/pods/25955027-6da1-4cce-8074-f079cf65f840/volumes" Feb 23 10:36:08 crc kubenswrapper[4904]: I0223 10:36:08.342941 4904 generic.go:334] "Generic (PLEG): container finished" podID="2c06f842-3a7c-47c7-a8c7-6738d29cdef7" containerID="67dbc0060cc56264f6d9e62cc010d2e9fce9ffc46e76d59b67d7ba0763af9754" exitCode=0 Feb 23 10:36:08 crc kubenswrapper[4904]: I0223 10:36:08.343039 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" event={"ID":"2c06f842-3a7c-47c7-a8c7-6738d29cdef7","Type":"ContainerDied","Data":"67dbc0060cc56264f6d9e62cc010d2e9fce9ffc46e76d59b67d7ba0763af9754"} Feb 23 10:36:09 crc kubenswrapper[4904]: I0223 10:36:09.911942 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" Feb 23 10:36:09 crc kubenswrapper[4904]: I0223 10:36:09.963937 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-ssh-key-openstack-edpm-ipam\") pod \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " Feb 23 10:36:09 crc kubenswrapper[4904]: I0223 10:36:09.964167 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrs8l\" (UniqueName: \"kubernetes.io/projected/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-kube-api-access-hrs8l\") pod \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " Feb 23 10:36:09 crc kubenswrapper[4904]: I0223 10:36:09.964268 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-inventory\") pod \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\" (UID: \"2c06f842-3a7c-47c7-a8c7-6738d29cdef7\") " Feb 23 10:36:09 crc kubenswrapper[4904]: I0223 10:36:09.973070 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-kube-api-access-hrs8l" (OuterVolumeSpecName: "kube-api-access-hrs8l") pod "2c06f842-3a7c-47c7-a8c7-6738d29cdef7" (UID: "2c06f842-3a7c-47c7-a8c7-6738d29cdef7"). InnerVolumeSpecName "kube-api-access-hrs8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.006027 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-inventory" (OuterVolumeSpecName: "inventory") pod "2c06f842-3a7c-47c7-a8c7-6738d29cdef7" (UID: "2c06f842-3a7c-47c7-a8c7-6738d29cdef7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.006834 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2c06f842-3a7c-47c7-a8c7-6738d29cdef7" (UID: "2c06f842-3a7c-47c7-a8c7-6738d29cdef7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.067040 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.067302 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.067378 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrs8l\" (UniqueName: \"kubernetes.io/projected/2c06f842-3a7c-47c7-a8c7-6738d29cdef7-kube-api-access-hrs8l\") on node \"crc\" DevicePath \"\"" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.368458 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" event={"ID":"2c06f842-3a7c-47c7-a8c7-6738d29cdef7","Type":"ContainerDied","Data":"4b453d72ea0398a2be4de12fa9ac4dfab5f32c2c682e46bb88ee4326d45a698d"} Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.368504 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b453d72ea0398a2be4de12fa9ac4dfab5f32c2c682e46bb88ee4326d45a698d" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.369004 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.469517 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n"] Feb 23 10:36:10 crc kubenswrapper[4904]: E0223 10:36:10.470109 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06f842-3a7c-47c7-a8c7-6738d29cdef7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.470131 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06f842-3a7c-47c7-a8c7-6738d29cdef7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.470439 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06f842-3a7c-47c7-a8c7-6738d29cdef7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.471342 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.476098 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.476128 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.476166 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.476173 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.499901 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n"] Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.578837 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.578949 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdm2s\" (UniqueName: \"kubernetes.io/projected/07a003d0-1511-4391-a569-fa105e6bdf07-kube-api-access-zdm2s\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.578983 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 
10:36:10.681665 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.681841 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdm2s\" (UniqueName: \"kubernetes.io/projected/07a003d0-1511-4391-a569-fa105e6bdf07-kube-api-access-zdm2s\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.681881 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.686285 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.689702 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.704472 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdm2s\" (UniqueName: \"kubernetes.io/projected/07a003d0-1511-4391-a569-fa105e6bdf07-kube-api-access-zdm2s\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-stq5n\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:10 crc kubenswrapper[4904]: I0223 10:36:10.796641 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:11 crc kubenswrapper[4904]: I0223 10:36:11.434761 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n"] Feb 23 10:36:11 crc kubenswrapper[4904]: W0223 10:36:11.436975 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07a003d0_1511_4391_a569_fa105e6bdf07.slice/crio-d30219a6b11029b0cfa94cfab0d81d83900a9e868904e3c2614c5a8a85455b71 WatchSource:0}: Error finding container d30219a6b11029b0cfa94cfab0d81d83900a9e868904e3c2614c5a8a85455b71: Status 404 returned error can't find the container with id d30219a6b11029b0cfa94cfab0d81d83900a9e868904e3c2614c5a8a85455b71 Feb 23 10:36:12 crc kubenswrapper[4904]: I0223 10:36:12.397318 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" event={"ID":"07a003d0-1511-4391-a569-fa105e6bdf07","Type":"ContainerStarted","Data":"1d8e17df4dd54203dc085b4d613125a585d2faca769ee3a4b0e71f952a1efc96"} Feb 23 10:36:12 crc kubenswrapper[4904]: I0223 10:36:12.398087 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" event={"ID":"07a003d0-1511-4391-a569-fa105e6bdf07","Type":"ContainerStarted","Data":"d30219a6b11029b0cfa94cfab0d81d83900a9e868904e3c2614c5a8a85455b71"} Feb 23 10:36:12 crc kubenswrapper[4904]: I0223 10:36:12.428096 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" podStartSLOduration=2.000729528 podStartE2EDuration="2.428040691s" podCreationTimestamp="2026-02-23 10:36:10 +0000 UTC" firstStartedPulling="2026-02-23 10:36:11.439341168 +0000 UTC m=+1804.859714701" lastFinishedPulling="2026-02-23 10:36:11.866652311 +0000 UTC m=+1805.287025864" observedRunningTime="2026-02-23 10:36:12.419941351 +0000 UTC m=+1805.840314904" watchObservedRunningTime="2026-02-23 10:36:12.428040691 +0000 UTC m=+1805.848414234" Feb 23 10:36:14 crc kubenswrapper[4904]: I0223 10:36:14.255903 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:36:14 crc kubenswrapper[4904]: E0223 10:36:14.256687 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:36:26 crc kubenswrapper[4904]: I0223 10:36:26.255795 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:36:26 crc kubenswrapper[4904]: E0223 10:36:26.256678 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:36:42 crc kubenswrapper[4904]: I0223 
10:36:42.256547 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:36:42 crc kubenswrapper[4904]: E0223 10:36:42.257628 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:36:44 crc kubenswrapper[4904]: I0223 10:36:44.848161 4904 scope.go:117] "RemoveContainer" containerID="6ebc72530255bb9ec0fe9b9ad8987895b784ddbc5daf3ec34f18062d67b01d6a" Feb 23 10:36:44 crc kubenswrapper[4904]: I0223 10:36:44.916105 4904 scope.go:117] "RemoveContainer" containerID="81e26ee0fa0c94165ab650892fdd472a4e230a32ad7e8da177b4f31df9544d28" Feb 23 10:36:44 crc kubenswrapper[4904]: I0223 10:36:44.969503 4904 scope.go:117] "RemoveContainer" containerID="07b264153c8230e4f64d0b18984c01b2465c471155f3cc715ab5ecfa5f5c8af9" Feb 23 10:36:52 crc kubenswrapper[4904]: I0223 10:36:52.065787 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-eeda-account-create-update-4g7sc"] Feb 23 10:36:52 crc kubenswrapper[4904]: I0223 10:36:52.084741 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7bvr2"] Feb 23 10:36:52 crc kubenswrapper[4904]: I0223 10:36:52.096037 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-eeda-account-create-update-4g7sc"] Feb 23 10:36:52 crc kubenswrapper[4904]: I0223 10:36:52.105986 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7bvr2"] Feb 23 10:36:52 crc kubenswrapper[4904]: I0223 10:36:52.887247 4904 generic.go:334] "Generic (PLEG): container finished" podID="07a003d0-1511-4391-a569-fa105e6bdf07" containerID="1d8e17df4dd54203dc085b4d613125a585d2faca769ee3a4b0e71f952a1efc96" exitCode=0 Feb 23 10:36:52 crc kubenswrapper[4904]: I0223 10:36:52.887355 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" event={"ID":"07a003d0-1511-4391-a569-fa105e6bdf07","Type":"ContainerDied","Data":"1d8e17df4dd54203dc085b4d613125a585d2faca769ee3a4b0e71f952a1efc96"} Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.049223 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8vhcb"] Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.062240 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8f5a-account-create-update-q5hlx"] Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.075316 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xgx5c"] Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.097664 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8vhcb"] Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.097750 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8f5a-account-create-update-q5hlx"] Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.103526 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ab80-account-create-update-j58px"] Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.111018 4904 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xgx5c"] Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.117375 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ab80-account-create-update-j58px"] Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.276522 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0650495a-8169-4d67-b016-f52cb76911b8" path="/var/lib/kubelet/pods/0650495a-8169-4d67-b016-f52cb76911b8/volumes" Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.278135 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28443614-6612-4fc3-9043-782b2175ddb3" path="/var/lib/kubelet/pods/28443614-6612-4fc3-9043-782b2175ddb3/volumes" Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.279652 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f85bef-7ffe-4784-8ba2-3bf8c6762e17" path="/var/lib/kubelet/pods/49f85bef-7ffe-4784-8ba2-3bf8c6762e17/volumes" Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.281231 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c368d56-bebc-433e-8519-9b8ab1ef51a4" path="/var/lib/kubelet/pods/4c368d56-bebc-433e-8519-9b8ab1ef51a4/volumes" Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.283644 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882ac903-0c5a-48c6-977d-645792363692" path="/var/lib/kubelet/pods/882ac903-0c5a-48c6-977d-645792363692/volumes" Feb 23 10:36:53 crc kubenswrapper[4904]: I0223 10:36:53.285088 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca88f3bb-29c1-41ef-8355-f9c52d62438a" path="/var/lib/kubelet/pods/ca88f3bb-29c1-41ef-8355-f9c52d62438a/volumes" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.398777 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.493102 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdm2s\" (UniqueName: \"kubernetes.io/projected/07a003d0-1511-4391-a569-fa105e6bdf07-kube-api-access-zdm2s\") pod \"07a003d0-1511-4391-a569-fa105e6bdf07\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.493229 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-inventory\") pod \"07a003d0-1511-4391-a569-fa105e6bdf07\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.493502 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-ssh-key-openstack-edpm-ipam\") pod \"07a003d0-1511-4391-a569-fa105e6bdf07\" (UID: \"07a003d0-1511-4391-a569-fa105e6bdf07\") " Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.501156 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a003d0-1511-4391-a569-fa105e6bdf07-kube-api-access-zdm2s" (OuterVolumeSpecName: "kube-api-access-zdm2s") pod "07a003d0-1511-4391-a569-fa105e6bdf07" (UID: "07a003d0-1511-4391-a569-fa105e6bdf07"). InnerVolumeSpecName "kube-api-access-zdm2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.525110 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-inventory" (OuterVolumeSpecName: "inventory") pod "07a003d0-1511-4391-a569-fa105e6bdf07" (UID: "07a003d0-1511-4391-a569-fa105e6bdf07"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.535536 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07a003d0-1511-4391-a569-fa105e6bdf07" (UID: "07a003d0-1511-4391-a569-fa105e6bdf07"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.596082 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdm2s\" (UniqueName: \"kubernetes.io/projected/07a003d0-1511-4391-a569-fa105e6bdf07-kube-api-access-zdm2s\") on node \"crc\" DevicePath \"\"" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.596125 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.596140 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07a003d0-1511-4391-a569-fa105e6bdf07-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.920170 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" event={"ID":"07a003d0-1511-4391-a569-fa105e6bdf07","Type":"ContainerDied","Data":"d30219a6b11029b0cfa94cfab0d81d83900a9e868904e3c2614c5a8a85455b71"} Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.920232 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d30219a6b11029b0cfa94cfab0d81d83900a9e868904e3c2614c5a8a85455b71" Feb 23 10:36:54 crc kubenswrapper[4904]: I0223 10:36:54.920250 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-stq5n" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.059967 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw"] Feb 23 10:36:55 crc kubenswrapper[4904]: E0223 10:36:55.060505 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a003d0-1511-4391-a569-fa105e6bdf07" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.060534 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a003d0-1511-4391-a569-fa105e6bdf07" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.060838 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a003d0-1511-4391-a569-fa105e6bdf07" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.061856 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.064195 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.064983 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.065306 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.065886 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.072201 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw"] Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.109901 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.109998 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rv2b\" (UniqueName: \"kubernetes.io/projected/7a5d553f-5efd-4ddf-953d-474d747de8f0-kube-api-access-9rv2b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.110239 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.212500 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.212570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rv2b\" (UniqueName: \"kubernetes.io/projected/7a5d553f-5efd-4ddf-953d-474d747de8f0-kube-api-access-9rv2b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.212635 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.218512 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.228880 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.256325 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rv2b\" (UniqueName: \"kubernetes.io/projected/7a5d553f-5efd-4ddf-953d-474d747de8f0-kube-api-access-9rv2b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.386039 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.893115 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw"] Feb 23 10:36:55 crc kubenswrapper[4904]: I0223 10:36:55.931831 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" event={"ID":"7a5d553f-5efd-4ddf-953d-474d747de8f0","Type":"ContainerStarted","Data":"d944a98e7451e223a085dbc9bb5f4d886835a9f266122c0b270d78c8db86e895"} Feb 23 10:36:56 crc kubenswrapper[4904]: I0223 10:36:56.255251 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:36:56 crc kubenswrapper[4904]: E0223 10:36:56.255528 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:36:56 crc kubenswrapper[4904]: I0223 10:36:56.946415 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" event={"ID":"7a5d553f-5efd-4ddf-953d-474d747de8f0","Type":"ContainerStarted","Data":"8c88e64afc03477c6b265e163651e3b6e87f54e884b2bf319540b55fb67bb4c7"} Feb 23 10:36:56 crc kubenswrapper[4904]: I0223 10:36:56.980823 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" podStartSLOduration=1.586491807 podStartE2EDuration="1.98079147s" podCreationTimestamp="2026-02-23 10:36:55 +0000 UTC" firstStartedPulling="2026-02-23 10:36:55.902305251 +0000 UTC m=+1849.322678784" lastFinishedPulling="2026-02-23 10:36:56.296604914 +0000 UTC m=+1849.716978447" observedRunningTime="2026-02-23 10:36:56.961517371 +0000 UTC m=+1850.381890904" watchObservedRunningTime="2026-02-23 10:36:56.98079147 +0000 UTC m=+1850.401165023" Feb 23 10:37:10 crc kubenswrapper[4904]: I0223 10:37:10.256616 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:37:10 crc kubenswrapper[4904]: E0223 10:37:10.257862 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:37:21 crc kubenswrapper[4904]: I0223 10:37:21.038943 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thqwm"] Feb 23 10:37:21 crc kubenswrapper[4904]: I0223 10:37:21.047127 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thqwm"] Feb 23 10:37:21 crc kubenswrapper[4904]: I0223 10:37:21.255081 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:37:21 crc kubenswrapper[4904]: E0223 10:37:21.255475 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:37:21 crc kubenswrapper[4904]: I0223 10:37:21.269687 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d42a58-0412-45e7-85a3-99a0a16346bc" path="/var/lib/kubelet/pods/12d42a58-0412-45e7-85a3-99a0a16346bc/volumes" Feb 23 10:37:32 crc kubenswrapper[4904]: I0223 10:37:32.256483 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:37:32 crc kubenswrapper[4904]: E0223 10:37:32.257900 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:37:44 crc kubenswrapper[4904]: I0223 10:37:44.256682 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:37:44 crc kubenswrapper[4904]: E0223 10:37:44.257969 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.072381 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4cq2r"] Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.084634 4904 scope.go:117] "RemoveContainer" containerID="29e2bb6a8f663b61ff4c1afdd97853563a57cda1a69ff975a010bd7248585506" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.085934 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mlbcz"] Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.095236 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4cq2r"] Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.106418 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mlbcz"] Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.128205 4904 scope.go:117] "RemoveContainer" containerID="faca134c21005630ea687c2e78a51f0b353b6469036a4874d7a7be65fb15caf3" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.226647 4904 scope.go:117] "RemoveContainer" containerID="31359bc5018f2fe1d1eabf2ef89cec97248604ce8de611d76530e49c4778ff0f" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.272705 4904 scope.go:117] "RemoveContainer" containerID="c72c16fe1fab54fb1580a46f66f4d7bc93cc4a9f54e5a522203dc428b734cc18" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.273873 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe" path="/var/lib/kubelet/pods/2ce0a6f6-befd-45fe-a0c2-861c6bb3e7fe/volumes" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.274511 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8e681f-ebb7-4bdb-b592-2636fab0d8b1" path="/var/lib/kubelet/pods/3b8e681f-ebb7-4bdb-b592-2636fab0d8b1/volumes" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.338526 4904 scope.go:117] "RemoveContainer" containerID="a7e6dc5b560a690894c4f234267eaaa3e8260db08bc78c71cd6efc69ec379d05" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.367329 4904 scope.go:117] "RemoveContainer" containerID="f6e1341e604eab099d266f1eee7d0799cd7bd22238a63d2e1793bc32284a8d8b" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.404283 4904 scope.go:117] "RemoveContainer" containerID="3ca5f15ceacdc0b49538001afaf0255d5b4d7ab4c67657de7daa976c565130e7" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.433320 4904 scope.go:117] "RemoveContainer" containerID="93f694d8cb626fde538d36935b090b1383eee549aadd2a290312700694cdfa7b" Feb 23 10:37:45 crc kubenswrapper[4904]: I0223 10:37:45.493214 4904 scope.go:117] "RemoveContainer" containerID="ab6d1e25402eaaacefc9793360b9f580e6dcf0e908956ab7d4f44b4187f5251e" Feb 23 10:37:48 crc kubenswrapper[4904]: I0223 10:37:48.614408 4904 generic.go:334] "Generic (PLEG): container finished" podID="7a5d553f-5efd-4ddf-953d-474d747de8f0" containerID="8c88e64afc03477c6b265e163651e3b6e87f54e884b2bf319540b55fb67bb4c7" exitCode=0 Feb 23 10:37:48 crc kubenswrapper[4904]: I0223 10:37:48.614524 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" 
event={"ID":"7a5d553f-5efd-4ddf-953d-474d747de8f0","Type":"ContainerDied","Data":"8c88e64afc03477c6b265e163651e3b6e87f54e884b2bf319540b55fb67bb4c7"} Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.233789 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.393575 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rv2b\" (UniqueName: \"kubernetes.io/projected/7a5d553f-5efd-4ddf-953d-474d747de8f0-kube-api-access-9rv2b\") pod \"7a5d553f-5efd-4ddf-953d-474d747de8f0\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.394022 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-ssh-key-openstack-edpm-ipam\") pod \"7a5d553f-5efd-4ddf-953d-474d747de8f0\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.394084 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-inventory\") pod \"7a5d553f-5efd-4ddf-953d-474d747de8f0\" (UID: \"7a5d553f-5efd-4ddf-953d-474d747de8f0\") " Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.402520 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5d553f-5efd-4ddf-953d-474d747de8f0-kube-api-access-9rv2b" (OuterVolumeSpecName: "kube-api-access-9rv2b") pod "7a5d553f-5efd-4ddf-953d-474d747de8f0" (UID: "7a5d553f-5efd-4ddf-953d-474d747de8f0"). InnerVolumeSpecName "kube-api-access-9rv2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.445112 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a5d553f-5efd-4ddf-953d-474d747de8f0" (UID: "7a5d553f-5efd-4ddf-953d-474d747de8f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.446290 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-inventory" (OuterVolumeSpecName: "inventory") pod "7a5d553f-5efd-4ddf-953d-474d747de8f0" (UID: "7a5d553f-5efd-4ddf-953d-474d747de8f0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.498649 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.498737 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d553f-5efd-4ddf-953d-474d747de8f0-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.498758 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rv2b\" (UniqueName: \"kubernetes.io/projected/7a5d553f-5efd-4ddf-953d-474d747de8f0-kube-api-access-9rv2b\") on node \"crc\" DevicePath \"\"" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.643870 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" event={"ID":"7a5d553f-5efd-4ddf-953d-474d747de8f0","Type":"ContainerDied","Data":"d944a98e7451e223a085dbc9bb5f4d886835a9f266122c0b270d78c8db86e895"} Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.644341 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d944a98e7451e223a085dbc9bb5f4d886835a9f266122c0b270d78c8db86e895" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.643976 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.769961 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kxtnf"] Feb 23 10:37:50 crc kubenswrapper[4904]: E0223 10:37:50.770609 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5d553f-5efd-4ddf-953d-474d747de8f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.770635 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5d553f-5efd-4ddf-953d-474d747de8f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.771037 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5d553f-5efd-4ddf-953d-474d747de8f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.772192 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.774460 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.774965 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.779136 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.779200 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.799201 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kxtnf"] Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.913968 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvcgb\" (UniqueName: \"kubernetes.io/projected/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-kube-api-access-wvcgb\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.914936 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:50 crc kubenswrapper[4904]: I0223 10:37:50.915048 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:51 crc kubenswrapper[4904]: I0223 10:37:51.016911 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvcgb\" (UniqueName: \"kubernetes.io/projected/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-kube-api-access-wvcgb\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:51 crc kubenswrapper[4904]: I0223 10:37:51.017118 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:51 crc kubenswrapper[4904]: I0223 10:37:51.017152 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:51 crc 
kubenswrapper[4904]: I0223 10:37:51.023405 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:51 crc kubenswrapper[4904]: I0223 10:37:51.025555 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:51 crc kubenswrapper[4904]: I0223 10:37:51.037505 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvcgb\" (UniqueName: \"kubernetes.io/projected/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-kube-api-access-wvcgb\") pod \"ssh-known-hosts-edpm-deployment-kxtnf\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:51 crc kubenswrapper[4904]: I0223 10:37:51.104872 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:37:51 crc kubenswrapper[4904]: I0223 10:37:51.728986 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kxtnf"] Feb 23 10:37:51 crc kubenswrapper[4904]: I0223 10:37:51.736756 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:37:52 crc kubenswrapper[4904]: I0223 10:37:52.670874 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" event={"ID":"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b","Type":"ContainerStarted","Data":"abb10578214eeea02db7e676cbd605efbde93b654667ebf655dbb7663fe04b14"} Feb 23 10:37:52 crc kubenswrapper[4904]: I0223 10:37:52.671216 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" event={"ID":"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b","Type":"ContainerStarted","Data":"1cd58c9386de15fb9043acdf87259740710094316e70860263ad2a334148cd5d"} Feb 23 10:37:52 crc kubenswrapper[4904]: I0223 10:37:52.698583 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" podStartSLOduration=2.203209914 podStartE2EDuration="2.698560597s" podCreationTimestamp="2026-02-23 10:37:50 +0000 UTC" firstStartedPulling="2026-02-23 10:37:51.736328802 +0000 UTC m=+1905.156702355" lastFinishedPulling="2026-02-23 10:37:52.231679485 +0000 UTC m=+1905.652053038" observedRunningTime="2026-02-23 10:37:52.689987313 +0000 UTC m=+1906.110360836" watchObservedRunningTime="2026-02-23 10:37:52.698560597 +0000 UTC m=+1906.118934120" Feb 23 10:37:55 crc kubenswrapper[4904]: I0223 10:37:55.256533 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:37:55 crc kubenswrapper[4904]: I0223 10:37:55.713876 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"414846274ece15cca21a2594f0ee6206cc2db384fa114ea705e7276249d406b9"} Feb 23 10:37:59 crc 
kubenswrapper[4904]: I0223 10:37:59.750954 4904 generic.go:334] "Generic (PLEG): container finished" podID="9c82fdc6-273e-4aaf-80b9-d6c461d0a40b" containerID="abb10578214eeea02db7e676cbd605efbde93b654667ebf655dbb7663fe04b14" exitCode=0 Feb 23 10:37:59 crc kubenswrapper[4904]: I0223 10:37:59.751067 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" event={"ID":"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b","Type":"ContainerDied","Data":"abb10578214eeea02db7e676cbd605efbde93b654667ebf655dbb7663fe04b14"} Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.253037 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.265669 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-inventory-0\") pod \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.265780 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvcgb\" (UniqueName: \"kubernetes.io/projected/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-kube-api-access-wvcgb\") pod \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.265869 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-ssh-key-openstack-edpm-ipam\") pod \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\" (UID: \"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b\") " Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.273072 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-kube-api-access-wvcgb" (OuterVolumeSpecName: "kube-api-access-wvcgb") pod "9c82fdc6-273e-4aaf-80b9-d6c461d0a40b" (UID: "9c82fdc6-273e-4aaf-80b9-d6c461d0a40b"). InnerVolumeSpecName "kube-api-access-wvcgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.311487 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9c82fdc6-273e-4aaf-80b9-d6c461d0a40b" (UID: "9c82fdc6-273e-4aaf-80b9-d6c461d0a40b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.312297 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9c82fdc6-273e-4aaf-80b9-d6c461d0a40b" (UID: "9c82fdc6-273e-4aaf-80b9-d6c461d0a40b"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.370913 4904 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.370967 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvcgb\" (UniqueName: \"kubernetes.io/projected/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-kube-api-access-wvcgb\") on node \"crc\" DevicePath \"\"" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.371015 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c82fdc6-273e-4aaf-80b9-d6c461d0a40b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.782102 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" event={"ID":"9c82fdc6-273e-4aaf-80b9-d6c461d0a40b","Type":"ContainerDied","Data":"1cd58c9386de15fb9043acdf87259740710094316e70860263ad2a334148cd5d"} Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.782156 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd58c9386de15fb9043acdf87259740710094316e70860263ad2a334148cd5d" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.782204 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kxtnf" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.884743 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"] Feb 23 10:38:01 crc kubenswrapper[4904]: E0223 10:38:01.885256 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c82fdc6-273e-4aaf-80b9-d6c461d0a40b" containerName="ssh-known-hosts-edpm-deployment" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.885562 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c82fdc6-273e-4aaf-80b9-d6c461d0a40b" containerName="ssh-known-hosts-edpm-deployment" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.885900 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c82fdc6-273e-4aaf-80b9-d6c461d0a40b" containerName="ssh-known-hosts-edpm-deployment" Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.886698 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.889120 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm"
Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.889439 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.889857 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.890487 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.907838 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"]
Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.987352 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g7k9\" (UniqueName: \"kubernetes.io/projected/24bb648a-839a-4b68-80ae-949a7995921d-kube-api-access-9g7k9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.987446 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:01 crc kubenswrapper[4904]: I0223 10:38:01.987812 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:02 crc kubenswrapper[4904]: I0223 10:38:02.089441 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7k9\" (UniqueName: \"kubernetes.io/projected/24bb648a-839a-4b68-80ae-949a7995921d-kube-api-access-9g7k9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:02 crc kubenswrapper[4904]: I0223 10:38:02.089553 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:02 crc kubenswrapper[4904]: I0223 10:38:02.089637 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:02 crc kubenswrapper[4904]: I0223 10:38:02.095190 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:02 crc kubenswrapper[4904]: I0223 10:38:02.095812 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:02 crc kubenswrapper[4904]: I0223 10:38:02.112058 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7k9\" (UniqueName: \"kubernetes.io/projected/24bb648a-839a-4b68-80ae-949a7995921d-kube-api-access-9g7k9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tc422\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:02 crc kubenswrapper[4904]: I0223 10:38:02.223530 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:02 crc kubenswrapper[4904]: W0223 10:38:02.802211 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24bb648a_839a_4b68_80ae_949a7995921d.slice/crio-af0b05dfa9800de2858ebdc2b0dfb57d0bc3267bd2db9d18d8bb0d511055a58d WatchSource:0}: Error finding container af0b05dfa9800de2858ebdc2b0dfb57d0bc3267bd2db9d18d8bb0d511055a58d: Status 404 returned error can't find the container with id af0b05dfa9800de2858ebdc2b0dfb57d0bc3267bd2db9d18d8bb0d511055a58d
Feb 23 10:38:02 crc kubenswrapper[4904]: I0223 10:38:02.803039 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"]
Feb 23 10:38:03 crc kubenswrapper[4904]: I0223 10:38:03.803677 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422" event={"ID":"24bb648a-839a-4b68-80ae-949a7995921d","Type":"ContainerStarted","Data":"51dc637e5af5dfc039e9e8c1d3f7a0269c3777b0c3f2a31567c94b469800954d"}
Feb 23 10:38:03 crc kubenswrapper[4904]: I0223 10:38:03.804403 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422" event={"ID":"24bb648a-839a-4b68-80ae-949a7995921d","Type":"ContainerStarted","Data":"af0b05dfa9800de2858ebdc2b0dfb57d0bc3267bd2db9d18d8bb0d511055a58d"}
Feb 23 10:38:03 crc kubenswrapper[4904]: I0223 10:38:03.830286 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422" podStartSLOduration=2.349136823 podStartE2EDuration="2.830267621s" podCreationTimestamp="2026-02-23 10:38:01 +0000 UTC" firstStartedPulling="2026-02-23 10:38:02.804460095 +0000 UTC m=+1916.224833628" lastFinishedPulling="2026-02-23 10:38:03.285590893 +0000 UTC m=+1916.705964426" observedRunningTime="2026-02-23 10:38:03.823028695 +0000 UTC m=+1917.243402208" watchObservedRunningTime="2026-02-23 10:38:03.830267621 +0000 UTC m=+1917.250641124"
Feb 23 10:38:11 crc kubenswrapper[4904]: I0223 10:38:11.901967 4904 generic.go:334] "Generic (PLEG): container finished" podID="24bb648a-839a-4b68-80ae-949a7995921d" containerID="51dc637e5af5dfc039e9e8c1d3f7a0269c3777b0c3f2a31567c94b469800954d" exitCode=0
Feb 23 10:38:11 crc kubenswrapper[4904]: I0223 10:38:11.902163 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422" event={"ID":"24bb648a-839a-4b68-80ae-949a7995921d","Type":"ContainerDied","Data":"51dc637e5af5dfc039e9e8c1d3f7a0269c3777b0c3f2a31567c94b469800954d"}
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.441271 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.576261 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-ssh-key-openstack-edpm-ipam\") pod \"24bb648a-839a-4b68-80ae-949a7995921d\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") "
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.576385 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-inventory\") pod \"24bb648a-839a-4b68-80ae-949a7995921d\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") "
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.576853 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g7k9\" (UniqueName: \"kubernetes.io/projected/24bb648a-839a-4b68-80ae-949a7995921d-kube-api-access-9g7k9\") pod \"24bb648a-839a-4b68-80ae-949a7995921d\" (UID: \"24bb648a-839a-4b68-80ae-949a7995921d\") "
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.584580 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24bb648a-839a-4b68-80ae-949a7995921d-kube-api-access-9g7k9" (OuterVolumeSpecName: "kube-api-access-9g7k9") pod "24bb648a-839a-4b68-80ae-949a7995921d" (UID: "24bb648a-839a-4b68-80ae-949a7995921d"). InnerVolumeSpecName "kube-api-access-9g7k9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.625126 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-inventory" (OuterVolumeSpecName: "inventory") pod "24bb648a-839a-4b68-80ae-949a7995921d" (UID: "24bb648a-839a-4b68-80ae-949a7995921d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.638215 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24bb648a-839a-4b68-80ae-949a7995921d" (UID: "24bb648a-839a-4b68-80ae-949a7995921d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.680209 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g7k9\" (UniqueName: \"kubernetes.io/projected/24bb648a-839a-4b68-80ae-949a7995921d-kube-api-access-9g7k9\") on node \"crc\" DevicePath \"\""
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.680265 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.680286 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24bb648a-839a-4b68-80ae-949a7995921d-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.933017 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422" event={"ID":"24bb648a-839a-4b68-80ae-949a7995921d","Type":"ContainerDied","Data":"af0b05dfa9800de2858ebdc2b0dfb57d0bc3267bd2db9d18d8bb0d511055a58d"}
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.933389 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af0b05dfa9800de2858ebdc2b0dfb57d0bc3267bd2db9d18d8bb0d511055a58d"
Feb 23 10:38:13 crc kubenswrapper[4904]: I0223 10:38:13.933156 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tc422"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.067772 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"]
Feb 23 10:38:14 crc kubenswrapper[4904]: E0223 10:38:14.068407 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bb648a-839a-4b68-80ae-949a7995921d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.068523 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bb648a-839a-4b68-80ae-949a7995921d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.068841 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="24bb648a-839a-4b68-80ae-949a7995921d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.070985 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.073566 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.074231 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.075062 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.075284 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.091001 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"]
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.194199 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g69nb\" (UniqueName: \"kubernetes.io/projected/7fb77479-c222-4b0b-92fa-7405cae94fd9-kube-api-access-g69nb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.194290 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.194342 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.296010 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.296357 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g69nb\" (UniqueName: \"kubernetes.io/projected/7fb77479-c222-4b0b-92fa-7405cae94fd9-kube-api-access-g69nb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"
Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.296437 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-inventory\") pod
\"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.303381 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.303433 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.330247 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g69nb\" (UniqueName: \"kubernetes.io/projected/7fb77479-c222-4b0b-92fa-7405cae94fd9-kube-api-access-g69nb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.424415 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.846200 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq"] Feb 23 10:38:14 crc kubenswrapper[4904]: I0223 10:38:14.944978 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" event={"ID":"7fb77479-c222-4b0b-92fa-7405cae94fd9","Type":"ContainerStarted","Data":"90161a8940431cf995359de2f7c706fc80bfc65e65bdf636dd131f17eb8a7701"} Feb 23 10:38:15 crc kubenswrapper[4904]: I0223 10:38:15.957078 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" event={"ID":"7fb77479-c222-4b0b-92fa-7405cae94fd9","Type":"ContainerStarted","Data":"9b5f849c91b4b650f70c7ac23c00953b511751d66fd4c33388d8f070d9914fdc"} Feb 23 10:38:15 crc kubenswrapper[4904]: I0223 10:38:15.979365 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" podStartSLOduration=1.566790622 podStartE2EDuration="1.979342898s" podCreationTimestamp="2026-02-23 10:38:14 +0000 UTC" firstStartedPulling="2026-02-23 10:38:14.853084804 +0000 UTC m=+1928.273458327" lastFinishedPulling="2026-02-23 10:38:15.26563706 +0000 UTC m=+1928.686010603" observedRunningTime="2026-02-23 10:38:15.976970421 +0000 UTC m=+1929.397343974" watchObservedRunningTime="2026-02-23 10:38:15.979342898 +0000 UTC m=+1929.399716431" Feb 23 10:38:26 crc kubenswrapper[4904]: I0223 10:38:26.065390 4904 generic.go:334] "Generic (PLEG): container finished" podID="7fb77479-c222-4b0b-92fa-7405cae94fd9" containerID="9b5f849c91b4b650f70c7ac23c00953b511751d66fd4c33388d8f070d9914fdc" exitCode=0 Feb 23 10:38:26 crc kubenswrapper[4904]: I0223 10:38:26.065464 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" event={"ID":"7fb77479-c222-4b0b-92fa-7405cae94fd9","Type":"ContainerDied","Data":"9b5f849c91b4b650f70c7ac23c00953b511751d66fd4c33388d8f070d9914fdc"} Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.631376 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.676151 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g69nb\" (UniqueName: \"kubernetes.io/projected/7fb77479-c222-4b0b-92fa-7405cae94fd9-kube-api-access-g69nb\") pod \"7fb77479-c222-4b0b-92fa-7405cae94fd9\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.676401 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-ssh-key-openstack-edpm-ipam\") pod \"7fb77479-c222-4b0b-92fa-7405cae94fd9\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.676577 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-inventory\") pod \"7fb77479-c222-4b0b-92fa-7405cae94fd9\" (UID: \"7fb77479-c222-4b0b-92fa-7405cae94fd9\") " Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.692957 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb77479-c222-4b0b-92fa-7405cae94fd9-kube-api-access-g69nb" (OuterVolumeSpecName: "kube-api-access-g69nb") pod "7fb77479-c222-4b0b-92fa-7405cae94fd9" (UID: "7fb77479-c222-4b0b-92fa-7405cae94fd9"). InnerVolumeSpecName "kube-api-access-g69nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.722318 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7fb77479-c222-4b0b-92fa-7405cae94fd9" (UID: "7fb77479-c222-4b0b-92fa-7405cae94fd9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.728689 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-inventory" (OuterVolumeSpecName: "inventory") pod "7fb77479-c222-4b0b-92fa-7405cae94fd9" (UID: "7fb77479-c222-4b0b-92fa-7405cae94fd9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.778796 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g69nb\" (UniqueName: \"kubernetes.io/projected/7fb77479-c222-4b0b-92fa-7405cae94fd9-kube-api-access-g69nb\") on node \"crc\" DevicePath \"\"" Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.778833 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:38:27 crc kubenswrapper[4904]: I0223 10:38:27.778847 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb77479-c222-4b0b-92fa-7405cae94fd9-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.094643 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" event={"ID":"7fb77479-c222-4b0b-92fa-7405cae94fd9","Type":"ContainerDied","Data":"90161a8940431cf995359de2f7c706fc80bfc65e65bdf636dd131f17eb8a7701"} Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.094690 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90161a8940431cf995359de2f7c706fc80bfc65e65bdf636dd131f17eb8a7701" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.094697 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.196927 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck"] Feb 23 10:38:28 crc kubenswrapper[4904]: E0223 10:38:28.197413 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb77479-c222-4b0b-92fa-7405cae94fd9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.197436 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb77479-c222-4b0b-92fa-7405cae94fd9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.197742 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb77479-c222-4b0b-92fa-7405cae94fd9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.198557 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.201776 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.201818 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.201842 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.202371 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.202408 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.202595 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.202602 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.202854 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.237364 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck"] Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.289490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.289780 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.289964 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.290083 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.290278 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.290874 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.291022 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.291156 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.291292 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.291783 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.291904 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlpff\" (UniqueName: 
\"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-kube-api-access-tlpff\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.292008 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.292118 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.292273 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.394494 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.394644 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.394698 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.394863 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.394921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.394981 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.395025 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.395064 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.395099 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.395138 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlpff\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-kube-api-access-tlpff\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.395198 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.395255 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.395316 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.395370 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.399767 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.399853 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.399975 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.400840 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.401647 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.403174 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.403504 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.403704 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.405072 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.405310 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.405437 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.413261 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.414554 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tlpff\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-kube-api-access-tlpff\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.419706 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d26ck\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:28 crc kubenswrapper[4904]: I0223 10:38:28.531930 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:38:29 crc kubenswrapper[4904]: I0223 10:38:29.155924 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck"] Feb 23 10:38:30 crc kubenswrapper[4904]: I0223 10:38:30.073951 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6q59t"] Feb 23 10:38:30 crc kubenswrapper[4904]: I0223 10:38:30.088150 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6q59t"] Feb 23 10:38:30 crc kubenswrapper[4904]: I0223 10:38:30.121588 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" event={"ID":"4df029b4-8135-45ce-a861-bd06d35ee0ab","Type":"ContainerStarted","Data":"e22e14a3896502e7074ba178359c0ebb0075e8143ebbff20e76e85d373ee229a"} Feb 23 10:38:30 crc kubenswrapper[4904]: I0223 10:38:30.121676 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" event={"ID":"4df029b4-8135-45ce-a861-bd06d35ee0ab","Type":"ContainerStarted","Data":"cbe1ca010f6b61a78af9b7ee4f65885d1bf50fed731aee20360ff0d5e7f23d89"} Feb 23 10:38:30 crc kubenswrapper[4904]: I0223 10:38:30.161233 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" podStartSLOduration=1.741166812 podStartE2EDuration="2.161208251s" podCreationTimestamp="2026-02-23 10:38:28 +0000 UTC" firstStartedPulling="2026-02-23 10:38:29.164961537 +0000 UTC m=+1942.585335060" lastFinishedPulling="2026-02-23 10:38:29.585002996 +0000 UTC m=+1943.005376499" observedRunningTime="2026-02-23 10:38:30.160014987 +0000 UTC m=+1943.580388540" watchObservedRunningTime="2026-02-23 10:38:30.161208251 +0000 UTC m=+1943.581581804" Feb 23 10:38:31 crc kubenswrapper[4904]: I0223 10:38:31.273961 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de5ba4e-08ff-4c74-b42c-037d9f6b8d58" path="/var/lib/kubelet/pods/3de5ba4e-08ff-4c74-b42c-037d9f6b8d58/volumes" Feb 23 10:38:45 crc kubenswrapper[4904]: I0223 10:38:45.739823 4904 scope.go:117] "RemoveContainer" containerID="bccdcb450f4f692b42e8199cdc9e2bc972283ec25a69293ce48f39063a515c86" Feb 23 10:39:10 crc kubenswrapper[4904]: I0223 10:39:10.586108 4904 generic.go:334] "Generic (PLEG): container finished" podID="4df029b4-8135-45ce-a861-bd06d35ee0ab" containerID="e22e14a3896502e7074ba178359c0ebb0075e8143ebbff20e76e85d373ee229a" exitCode=0 Feb 23 10:39:10 crc kubenswrapper[4904]: I0223 10:39:10.586704 
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.069660 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck"
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.169749 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-bootstrap-combined-ca-bundle\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") "
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.170114 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-telemetry-combined-ca-bundle\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") "
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.170245 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") "
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.170370 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-libvirt-combined-ca-bundle\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") "
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.170870 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-ovn-default-certs-0\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") "
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.170993 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-inventory\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") "
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.171105 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") "
Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.171218 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ovn-combined-ca-bundle\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: 
\"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.171335 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-repo-setup-combined-ca-bundle\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.171521 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.171620 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ssh-key-openstack-edpm-ipam\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.171802 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlpff\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-kube-api-access-tlpff\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.171982 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-nova-combined-ca-bundle\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.172090 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-neutron-metadata-combined-ca-bundle\") pod \"4df029b4-8135-45ce-a861-bd06d35ee0ab\" (UID: \"4df029b4-8135-45ce-a861-bd06d35ee0ab\") " Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.175287 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.176398 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.177140 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.177164 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.177536 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.178513 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.178804 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.178895 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.180522 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-kube-api-access-tlpff" (OuterVolumeSpecName: "kube-api-access-tlpff") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "kube-api-access-tlpff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.180684 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.181052 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.183981 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.202630 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.206958 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-inventory" (OuterVolumeSpecName: "inventory") pod "4df029b4-8135-45ce-a861-bd06d35ee0ab" (UID: "4df029b4-8135-45ce-a861-bd06d35ee0ab"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275132 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlpff\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-kube-api-access-tlpff\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275167 4904 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275178 4904 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275190 4904 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275202 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275212 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275222 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275231 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275242 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275251 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275263 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275275 4904 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275283 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4df029b4-8135-45ce-a861-bd06d35ee0ab-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.275293 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4df029b4-8135-45ce-a861-bd06d35ee0ab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.614353 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" event={"ID":"4df029b4-8135-45ce-a861-bd06d35ee0ab","Type":"ContainerDied","Data":"cbe1ca010f6b61a78af9b7ee4f65885d1bf50fed731aee20360ff0d5e7f23d89"} Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.614411 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbe1ca010f6b61a78af9b7ee4f65885d1bf50fed731aee20360ff0d5e7f23d89" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.614455 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d26ck" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.753102 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn"] Feb 23 10:39:12 crc kubenswrapper[4904]: E0223 10:39:12.753603 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df029b4-8135-45ce-a861-bd06d35ee0ab" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.753628 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df029b4-8135-45ce-a861-bd06d35ee0ab" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.753889 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df029b4-8135-45ce-a861-bd06d35ee0ab" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.754698 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.757959 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.758210 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.758317 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.758346 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.759173 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.790204 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.790269 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rtvx\" (UniqueName: \"kubernetes.io/projected/134b7bac-9265-4190-bcdc-847e77ecfce3-kube-api-access-5rtvx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.790385 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/134b7bac-9265-4190-bcdc-847e77ecfce3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.790476 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.790535 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.810088 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn"] Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.892227 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.892347 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.892380 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rtvx\" (UniqueName: \"kubernetes.io/projected/134b7bac-9265-4190-bcdc-847e77ecfce3-kube-api-access-5rtvx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.892419 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/134b7bac-9265-4190-bcdc-847e77ecfce3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.892468 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.895231 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/134b7bac-9265-4190-bcdc-847e77ecfce3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.898097 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.899197 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.900541 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:12 crc kubenswrapper[4904]: I0223 10:39:12.918784 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rtvx\" (UniqueName: \"kubernetes.io/projected/134b7bac-9265-4190-bcdc-847e77ecfce3-kube-api-access-5rtvx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srcxn\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:13 crc kubenswrapper[4904]: I0223 10:39:13.073272 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:39:13 crc kubenswrapper[4904]: I0223 10:39:13.687246 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn"] Feb 23 10:39:14 crc kubenswrapper[4904]: I0223 10:39:14.644914 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" event={"ID":"134b7bac-9265-4190-bcdc-847e77ecfce3","Type":"ContainerStarted","Data":"23241ff3baed39385fbca25a458b50bb6f54c590fb6707957daf52eb03621bb9"} Feb 23 10:39:14 crc kubenswrapper[4904]: I0223 10:39:14.645237 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" event={"ID":"134b7bac-9265-4190-bcdc-847e77ecfce3","Type":"ContainerStarted","Data":"8e22ac4cba8a088f114bcd09235ccd79bbf51ed9fdc2c7c4af0534e59a06dfd8"} Feb 23 10:39:14 crc kubenswrapper[4904]: I0223 10:39:14.678171 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" podStartSLOduration=2.171540859 podStartE2EDuration="2.678142813s" podCreationTimestamp="2026-02-23 10:39:12 +0000 UTC" firstStartedPulling="2026-02-23 10:39:13.676403182 +0000 UTC m=+1987.096776695" lastFinishedPulling="2026-02-23 10:39:14.183005096 +0000 UTC m=+1987.603378649" observedRunningTime="2026-02-23 10:39:14.672204044 +0000 UTC m=+1988.092577607" watchObservedRunningTime="2026-02-23 10:39:14.678142813 +0000 UTC m=+1988.098516386" Feb 23 10:40:17 crc kubenswrapper[4904]: I0223 10:40:17.398540 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:40:17 crc kubenswrapper[4904]: I0223 10:40:17.399058 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:40:20 crc kubenswrapper[4904]: I0223 10:40:20.378480 4904 generic.go:334] "Generic (PLEG): container finished" podID="134b7bac-9265-4190-bcdc-847e77ecfce3" containerID="23241ff3baed39385fbca25a458b50bb6f54c590fb6707957daf52eb03621bb9" exitCode=0 Feb 23 10:40:20 crc kubenswrapper[4904]: I0223 10:40:20.378625 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" event={"ID":"134b7bac-9265-4190-bcdc-847e77ecfce3","Type":"ContainerDied","Data":"23241ff3baed39385fbca25a458b50bb6f54c590fb6707957daf52eb03621bb9"} Feb 23 10:40:21 crc kubenswrapper[4904]: I0223 10:40:21.957934 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.046436 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ovn-combined-ca-bundle\") pod \"134b7bac-9265-4190-bcdc-847e77ecfce3\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.046776 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-inventory\") pod \"134b7bac-9265-4190-bcdc-847e77ecfce3\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.046840 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/134b7bac-9265-4190-bcdc-847e77ecfce3-ovncontroller-config-0\") pod \"134b7bac-9265-4190-bcdc-847e77ecfce3\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.046950 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rtvx\" (UniqueName: \"kubernetes.io/projected/134b7bac-9265-4190-bcdc-847e77ecfce3-kube-api-access-5rtvx\") pod \"134b7bac-9265-4190-bcdc-847e77ecfce3\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.047132 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ssh-key-openstack-edpm-ipam\") pod \"134b7bac-9265-4190-bcdc-847e77ecfce3\" (UID: \"134b7bac-9265-4190-bcdc-847e77ecfce3\") " Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.054737 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134b7bac-9265-4190-bcdc-847e77ecfce3-kube-api-access-5rtvx" (OuterVolumeSpecName: "kube-api-access-5rtvx") pod "134b7bac-9265-4190-bcdc-847e77ecfce3" (UID: "134b7bac-9265-4190-bcdc-847e77ecfce3"). InnerVolumeSpecName "kube-api-access-5rtvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.055033 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "134b7bac-9265-4190-bcdc-847e77ecfce3" (UID: "134b7bac-9265-4190-bcdc-847e77ecfce3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.080627 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134b7bac-9265-4190-bcdc-847e77ecfce3-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "134b7bac-9265-4190-bcdc-847e77ecfce3" (UID: "134b7bac-9265-4190-bcdc-847e77ecfce3"). 
InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.085445 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "134b7bac-9265-4190-bcdc-847e77ecfce3" (UID: "134b7bac-9265-4190-bcdc-847e77ecfce3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.086149 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-inventory" (OuterVolumeSpecName: "inventory") pod "134b7bac-9265-4190-bcdc-847e77ecfce3" (UID: "134b7bac-9265-4190-bcdc-847e77ecfce3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.150761 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rtvx\" (UniqueName: \"kubernetes.io/projected/134b7bac-9265-4190-bcdc-847e77ecfce3-kube-api-access-5rtvx\") on node \"crc\" DevicePath \"\"" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.151251 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.151523 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.151755 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134b7bac-9265-4190-bcdc-847e77ecfce3-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.151966 4904 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/134b7bac-9265-4190-bcdc-847e77ecfce3-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.404849 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" event={"ID":"134b7bac-9265-4190-bcdc-847e77ecfce3","Type":"ContainerDied","Data":"8e22ac4cba8a088f114bcd09235ccd79bbf51ed9fdc2c7c4af0534e59a06dfd8"} Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.404885 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srcxn" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.404912 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e22ac4cba8a088f114bcd09235ccd79bbf51ed9fdc2c7c4af0534e59a06dfd8" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.709432 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt"] Feb 23 10:40:22 crc kubenswrapper[4904]: E0223 10:40:22.709937 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134b7bac-9265-4190-bcdc-847e77ecfce3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.709954 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="134b7bac-9265-4190-bcdc-847e77ecfce3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.710208 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="134b7bac-9265-4190-bcdc-847e77ecfce3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.710961 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.715142 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.715238 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.715404 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.715546 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.715814 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.717633 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.730443 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt"] Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.873678 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.873957 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: 
\"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.874084 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.874153 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr6hl\" (UniqueName: \"kubernetes.io/projected/acb3154a-4b24-44c3-88f3-0c769ca1354d-kube-api-access-pr6hl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.874204 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.874381 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.976353 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.976482 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.976539 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 
10:40:22.976570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr6hl\" (UniqueName: \"kubernetes.io/projected/acb3154a-4b24-44c3-88f3-0c769ca1354d-kube-api-access-pr6hl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.976600 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.976677 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.982322 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.983112 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.986093 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.986562 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.992951 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" 
(UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:22 crc kubenswrapper[4904]: I0223 10:40:22.999794 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr6hl\" (UniqueName: \"kubernetes.io/projected/acb3154a-4b24-44c3-88f3-0c769ca1354d-kube-api-access-pr6hl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:23 crc kubenswrapper[4904]: I0223 10:40:23.066338 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:40:23 crc kubenswrapper[4904]: I0223 10:40:23.667957 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt"] Feb 23 10:40:23 crc kubenswrapper[4904]: W0223 10:40:23.671675 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacb3154a_4b24_44c3_88f3_0c769ca1354d.slice/crio-f5325db76a2692ba2f8884b3d0f2d52f0ca9a6e2c0730d303d13ed44b4ba5210 WatchSource:0}: Error finding container f5325db76a2692ba2f8884b3d0f2d52f0ca9a6e2c0730d303d13ed44b4ba5210: Status 404 returned error can't find the container with id f5325db76a2692ba2f8884b3d0f2d52f0ca9a6e2c0730d303d13ed44b4ba5210 Feb 23 10:40:24 crc kubenswrapper[4904]: I0223 10:40:24.427013 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" event={"ID":"acb3154a-4b24-44c3-88f3-0c769ca1354d","Type":"ContainerStarted","Data":"98c8067ead09d7c420c0fed793c0a1b9e5d756b151d6dd8d13ad3c991fbcbeaf"} Feb 23 10:40:24 crc kubenswrapper[4904]: I0223 10:40:24.427332 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" event={"ID":"acb3154a-4b24-44c3-88f3-0c769ca1354d","Type":"ContainerStarted","Data":"f5325db76a2692ba2f8884b3d0f2d52f0ca9a6e2c0730d303d13ed44b4ba5210"} Feb 23 10:40:24 crc kubenswrapper[4904]: I0223 10:40:24.449312 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" podStartSLOduration=2.011931499 podStartE2EDuration="2.449290811s" podCreationTimestamp="2026-02-23 10:40:22 +0000 UTC" firstStartedPulling="2026-02-23 10:40:23.674462901 +0000 UTC m=+2057.094836414" lastFinishedPulling="2026-02-23 10:40:24.111822173 +0000 UTC m=+2057.532195726" observedRunningTime="2026-02-23 10:40:24.445642017 +0000 UTC m=+2057.866015530" watchObservedRunningTime="2026-02-23 10:40:24.449290811 +0000 UTC m=+2057.869664334" Feb 23 10:40:47 crc kubenswrapper[4904]: I0223 10:40:47.398018 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:40:47 crc kubenswrapper[4904]: I0223 10:40:47.398529 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:41:15 crc kubenswrapper[4904]: I0223 10:41:15.048421 4904 generic.go:334] "Generic (PLEG): container finished" podID="acb3154a-4b24-44c3-88f3-0c769ca1354d" containerID="98c8067ead09d7c420c0fed793c0a1b9e5d756b151d6dd8d13ad3c991fbcbeaf" exitCode=0 Feb 23 10:41:15 crc kubenswrapper[4904]: I0223 10:41:15.048554 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" event={"ID":"acb3154a-4b24-44c3-88f3-0c769ca1354d","Type":"ContainerDied","Data":"98c8067ead09d7c420c0fed793c0a1b9e5d756b151d6dd8d13ad3c991fbcbeaf"} Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.466681 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.565955 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-inventory\") pod \"acb3154a-4b24-44c3-88f3-0c769ca1354d\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.566253 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"acb3154a-4b24-44c3-88f3-0c769ca1354d\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.566326 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr6hl\" (UniqueName: \"kubernetes.io/projected/acb3154a-4b24-44c3-88f3-0c769ca1354d-kube-api-access-pr6hl\") pod \"acb3154a-4b24-44c3-88f3-0c769ca1354d\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.566480 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-nova-metadata-neutron-config-0\") pod \"acb3154a-4b24-44c3-88f3-0c769ca1354d\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.566639 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-ssh-key-openstack-edpm-ipam\") pod \"acb3154a-4b24-44c3-88f3-0c769ca1354d\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.566806 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-metadata-combined-ca-bundle\") pod \"acb3154a-4b24-44c3-88f3-0c769ca1354d\" (UID: \"acb3154a-4b24-44c3-88f3-0c769ca1354d\") " Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.603159 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb3154a-4b24-44c3-88f3-0c769ca1354d-kube-api-access-pr6hl" (OuterVolumeSpecName: "kube-api-access-pr6hl") pod "acb3154a-4b24-44c3-88f3-0c769ca1354d" (UID: "acb3154a-4b24-44c3-88f3-0c769ca1354d"). 
InnerVolumeSpecName "kube-api-access-pr6hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.603330 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "acb3154a-4b24-44c3-88f3-0c769ca1354d" (UID: "acb3154a-4b24-44c3-88f3-0c769ca1354d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.674105 4904 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.674155 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr6hl\" (UniqueName: \"kubernetes.io/projected/acb3154a-4b24-44c3-88f3-0c769ca1354d-kube-api-access-pr6hl\") on node \"crc\" DevicePath \"\"" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.688033 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "acb3154a-4b24-44c3-88f3-0c769ca1354d" (UID: "acb3154a-4b24-44c3-88f3-0c769ca1354d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.735874 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "acb3154a-4b24-44c3-88f3-0c769ca1354d" (UID: "acb3154a-4b24-44c3-88f3-0c769ca1354d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.746882 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "acb3154a-4b24-44c3-88f3-0c769ca1354d" (UID: "acb3154a-4b24-44c3-88f3-0c769ca1354d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.768284 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-inventory" (OuterVolumeSpecName: "inventory") pod "acb3154a-4b24-44c3-88f3-0c769ca1354d" (UID: "acb3154a-4b24-44c3-88f3-0c769ca1354d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.776478 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.776516 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.776527 4904 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:41:16 crc kubenswrapper[4904]: I0223 10:41:16.776539 4904 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/acb3154a-4b24-44c3-88f3-0c769ca1354d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.068193 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" event={"ID":"acb3154a-4b24-44c3-88f3-0c769ca1354d","Type":"ContainerDied","Data":"f5325db76a2692ba2f8884b3d0f2d52f0ca9a6e2c0730d303d13ed44b4ba5210"} Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.068280 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.068283 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5325db76a2692ba2f8884b3d0f2d52f0ca9a6e2c0730d303d13ed44b4ba5210" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.193107 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb"] Feb 23 10:41:17 crc kubenswrapper[4904]: E0223 10:41:17.193509 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb3154a-4b24-44c3-88f3-0c769ca1354d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.193527 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb3154a-4b24-44c3-88f3-0c769ca1354d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.193740 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb3154a-4b24-44c3-88f3-0c769ca1354d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.194443 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.196955 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.197166 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.197233 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.197447 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.198491 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.211645 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb"] Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.287957 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.288046 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.288125 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-kube-api-access-bxv5t\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.288177 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.288207 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.390276 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.390387 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.390480 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-kube-api-access-bxv5t\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.390554 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.390592 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.395655 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.397210 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.397564 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.397611 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" 
podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.397599 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.397654 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.397975 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.399338 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"414846274ece15cca21a2594f0ee6206cc2db384fa114ea705e7276249d406b9"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.399434 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://414846274ece15cca21a2594f0ee6206cc2db384fa114ea705e7276249d406b9" gracePeriod=600 Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.415774 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-kube-api-access-bxv5t\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:17 crc kubenswrapper[4904]: I0223 10:41:17.558611 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:41:18 crc kubenswrapper[4904]: I0223 10:41:18.080124 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="414846274ece15cca21a2594f0ee6206cc2db384fa114ea705e7276249d406b9" exitCode=0 Feb 23 10:41:18 crc kubenswrapper[4904]: I0223 10:41:18.080209 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"414846274ece15cca21a2594f0ee6206cc2db384fa114ea705e7276249d406b9"} Feb 23 10:41:18 crc kubenswrapper[4904]: I0223 10:41:18.080820 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d"} Feb 23 10:41:18 crc kubenswrapper[4904]: I0223 10:41:18.080849 4904 scope.go:117] "RemoveContainer" containerID="73c3edece459f9faef12debd5251720f4d295f6a597fb5dd64165439cfc2f112" Feb 23 10:41:18 crc kubenswrapper[4904]: I0223 10:41:18.155818 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb"] Feb 23 10:41:19 crc kubenswrapper[4904]: I0223 10:41:19.099972 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" event={"ID":"7c4a4b83-33c3-418e-a2b7-ad52490fc88a","Type":"ContainerStarted","Data":"49b4683d7b146d3d21447a00fcc47c6b21c8b821414ba2db0ebfb0771c682723"} Feb 23 10:41:19 crc kubenswrapper[4904]: I0223 10:41:19.100527 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" event={"ID":"7c4a4b83-33c3-418e-a2b7-ad52490fc88a","Type":"ContainerStarted","Data":"b9a3f5936fc3c26e1167ecb1e72b188479832dbed058234971970c1ac8d0e2ec"} Feb 23 10:41:19 crc kubenswrapper[4904]: I0223 10:41:19.124208 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" podStartSLOduration=1.7185416340000002 podStartE2EDuration="2.124186082s" podCreationTimestamp="2026-02-23 10:41:17 +0000 UTC" firstStartedPulling="2026-02-23 10:41:18.163136701 +0000 UTC m=+2111.583510214" lastFinishedPulling="2026-02-23 10:41:18.568781139 +0000 UTC m=+2111.989154662" observedRunningTime="2026-02-23 10:41:19.120501977 +0000 UTC m=+2112.540875500" watchObservedRunningTime="2026-02-23 10:41:19.124186082 +0000 UTC m=+2112.544559595" Feb 23 10:42:49 crc kubenswrapper[4904]: I0223 10:42:49.918062 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bqrmk"] Feb 23 10:42:49 crc kubenswrapper[4904]: I0223 10:42:49.924092 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:49 crc kubenswrapper[4904]: I0223 10:42:49.938495 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqrmk"] Feb 23 10:42:49 crc kubenswrapper[4904]: I0223 10:42:49.964164 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jttml\" (UniqueName: \"kubernetes.io/projected/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-kube-api-access-jttml\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:49 crc kubenswrapper[4904]: I0223 10:42:49.964267 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-catalog-content\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:49 crc kubenswrapper[4904]: I0223 10:42:49.964310 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-utilities\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:50 crc kubenswrapper[4904]: I0223 10:42:50.065398 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jttml\" (UniqueName: \"kubernetes.io/projected/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-kube-api-access-jttml\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:50 crc kubenswrapper[4904]: I0223 10:42:50.065469 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-catalog-content\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:50 crc kubenswrapper[4904]: I0223 10:42:50.065491 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-utilities\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:50 crc kubenswrapper[4904]: I0223 10:42:50.066010 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-utilities\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:50 crc kubenswrapper[4904]: I0223 10:42:50.066027 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-catalog-content\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:50 crc kubenswrapper[4904]: I0223 10:42:50.102595 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jttml\" (UniqueName: \"kubernetes.io/projected/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-kube-api-access-jttml\") pod \"community-operators-bqrmk\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:50 crc kubenswrapper[4904]: I0223 10:42:50.260892 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:42:50 crc kubenswrapper[4904]: I0223 10:42:50.809480 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqrmk"] Feb 23 10:42:51 crc kubenswrapper[4904]: I0223 10:42:51.141858 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqrmk" event={"ID":"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb","Type":"ContainerStarted","Data":"f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b"} Feb 23 10:42:51 crc kubenswrapper[4904]: I0223 10:42:51.142175 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqrmk" event={"ID":"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb","Type":"ContainerStarted","Data":"60a03bfd2cf3e53cc0a32516a010ed7aca6f01bf7fd569257bf364ba136ce89f"} Feb 23 10:42:52 crc kubenswrapper[4904]: I0223 10:42:52.169697 4904 generic.go:334] "Generic (PLEG): container finished" podID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerID="f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b" exitCode=0 Feb 23 10:42:52 crc kubenswrapper[4904]: I0223 10:42:52.169841 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqrmk" event={"ID":"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb","Type":"ContainerDied","Data":"f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b"} Feb 23 10:42:53 crc kubenswrapper[4904]: I0223 10:42:53.180984 4904 generic.go:334] "Generic (PLEG): container finished" podID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerID="e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191" exitCode=0 Feb 23 10:42:53 crc kubenswrapper[4904]: I0223 10:42:53.181049 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqrmk" event={"ID":"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb","Type":"ContainerDied","Data":"e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191"} Feb 23 10:42:53 crc kubenswrapper[4904]: I0223 10:42:53.182910 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:42:54 crc kubenswrapper[4904]: I0223 10:42:54.196574 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqrmk" event={"ID":"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb","Type":"ContainerStarted","Data":"595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836"} Feb 23 10:42:54 crc kubenswrapper[4904]: I0223 10:42:54.234215 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bqrmk" podStartSLOduration=2.810108853 podStartE2EDuration="5.234192246s" podCreationTimestamp="2026-02-23 10:42:49 +0000 UTC" firstStartedPulling="2026-02-23 10:42:51.143787586 +0000 UTC m=+2204.564161099" lastFinishedPulling="2026-02-23 10:42:53.567870949 +0000 UTC m=+2206.988244492" observedRunningTime="2026-02-23 10:42:54.223970925 +0000 UTC m=+2207.644344448" watchObservedRunningTime="2026-02-23 
10:42:54.234192246 +0000 UTC m=+2207.654565769" Feb 23 10:43:00 crc kubenswrapper[4904]: I0223 10:43:00.261282 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:43:00 crc kubenswrapper[4904]: I0223 10:43:00.262116 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:43:00 crc kubenswrapper[4904]: I0223 10:43:00.344523 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:43:01 crc kubenswrapper[4904]: I0223 10:43:01.374865 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:43:01 crc kubenswrapper[4904]: I0223 10:43:01.462784 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqrmk"] Feb 23 10:43:03 crc kubenswrapper[4904]: I0223 10:43:03.323175 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bqrmk" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerName="registry-server" containerID="cri-o://595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836" gracePeriod=2 Feb 23 10:43:03 crc kubenswrapper[4904]: I0223 10:43:03.818628 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:43:03 crc kubenswrapper[4904]: I0223 10:43:03.932459 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-catalog-content\") pod \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " Feb 23 10:43:03 crc kubenswrapper[4904]: I0223 10:43:03.932695 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-utilities\") pod \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " Feb 23 10:43:03 crc kubenswrapper[4904]: I0223 10:43:03.932861 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jttml\" (UniqueName: \"kubernetes.io/projected/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-kube-api-access-jttml\") pod \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\" (UID: \"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb\") " Feb 23 10:43:03 crc kubenswrapper[4904]: I0223 10:43:03.933628 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-utilities" (OuterVolumeSpecName: "utilities") pod "f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" (UID: "f77369f6-3ceb-4dcf-bdcd-f8ea703253eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:43:03 crc kubenswrapper[4904]: I0223 10:43:03.943749 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-kube-api-access-jttml" (OuterVolumeSpecName: "kube-api-access-jttml") pod "f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" (UID: "f77369f6-3ceb-4dcf-bdcd-f8ea703253eb"). InnerVolumeSpecName "kube-api-access-jttml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:43:03 crc kubenswrapper[4904]: I0223 10:43:03.988628 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" (UID: "f77369f6-3ceb-4dcf-bdcd-f8ea703253eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.035120 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.035150 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.035160 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jttml\" (UniqueName: \"kubernetes.io/projected/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb-kube-api-access-jttml\") on node \"crc\" DevicePath \"\"" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.337050 4904 generic.go:334] "Generic (PLEG): container finished" podID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerID="595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836" exitCode=0 Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.337161 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqrmk" event={"ID":"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb","Type":"ContainerDied","Data":"595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836"} Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.337511 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqrmk" event={"ID":"f77369f6-3ceb-4dcf-bdcd-f8ea703253eb","Type":"ContainerDied","Data":"60a03bfd2cf3e53cc0a32516a010ed7aca6f01bf7fd569257bf364ba136ce89f"} Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.337546 4904 scope.go:117] "RemoveContainer" containerID="595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.337191 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqrmk" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.397936 4904 scope.go:117] "RemoveContainer" containerID="e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.426269 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqrmk"] Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.448328 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bqrmk"] Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.454661 4904 scope.go:117] "RemoveContainer" containerID="f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.502600 4904 scope.go:117] "RemoveContainer" containerID="595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836" Feb 23 10:43:04 crc kubenswrapper[4904]: E0223 10:43:04.503137 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836\": container with ID starting with 595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836 not found: ID does not exist" containerID="595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.503249 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836"} err="failed to get container status \"595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836\": rpc error: code = NotFound desc = could not find container \"595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836\": container with ID starting with 595bcd2d53a41840c98f6883c8ad11b40eb1f33015d412f4b7816892ef844836 not found: ID does not exist" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.503348 4904 scope.go:117] "RemoveContainer" containerID="e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191" Feb 23 10:43:04 crc kubenswrapper[4904]: E0223 10:43:04.503887 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191\": container with ID starting with e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191 not found: ID does not exist" containerID="e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.503917 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191"} err="failed to get container status \"e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191\": rpc error: code = NotFound desc = could not find container \"e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191\": container with ID starting with e9736e1678e1eb92240dab1cd121b196c314b8e0cd5f2b0dd70c2ae5facb7191 not found: ID does not exist" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.503942 4904 scope.go:117] "RemoveContainer" containerID="f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b" Feb 23 10:43:04 crc kubenswrapper[4904]: E0223 10:43:04.504170 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b\": container with ID starting with f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b not found: ID does not exist" containerID="f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b" Feb 23 10:43:04 crc kubenswrapper[4904]: I0223 10:43:04.504205 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b"} err="failed to get container status \"f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b\": rpc error: code = NotFound desc = could not find container \"f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b\": container with ID starting with f408c9807f523c9c21a6b697a14a37ba714c8c1a304696b4d2688a52c2ec806b not found: ID does not exist" Feb 23 10:43:05 crc kubenswrapper[4904]: I0223 10:43:05.269986 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" path="/var/lib/kubelet/pods/f77369f6-3ceb-4dcf-bdcd-f8ea703253eb/volumes" Feb 23 10:43:17 crc kubenswrapper[4904]: I0223 10:43:17.398047 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:43:17 crc kubenswrapper[4904]: I0223 10:43:17.398682 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.587148 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94ng2"] Feb 23 10:43:39 crc kubenswrapper[4904]: E0223 10:43:39.589181 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerName="extract-content" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.589214 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerName="extract-content" Feb 23 10:43:39 crc kubenswrapper[4904]: E0223 10:43:39.589272 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerName="registry-server" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.589286 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerName="registry-server" Feb 23 10:43:39 crc kubenswrapper[4904]: E0223 10:43:39.589350 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerName="extract-utilities" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.589367 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerName="extract-utilities" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.590314 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77369f6-3ceb-4dcf-bdcd-f8ea703253eb" containerName="registry-server" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 
10:43:39.595260 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.613156 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94ng2"] Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.750028 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-utilities\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.750108 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-catalog-content\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.750201 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvs8v\" (UniqueName: \"kubernetes.io/projected/03e8461c-7f4c-4402-8af5-ed9074c8a536-kube-api-access-vvs8v\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.852398 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-utilities\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.852458 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-catalog-content\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.852536 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvs8v\" (UniqueName: \"kubernetes.io/projected/03e8461c-7f4c-4402-8af5-ed9074c8a536-kube-api-access-vvs8v\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.853256 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-utilities\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.853381 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-catalog-content\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.878416 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvs8v\" (UniqueName: \"kubernetes.io/projected/03e8461c-7f4c-4402-8af5-ed9074c8a536-kube-api-access-vvs8v\") pod \"redhat-operators-94ng2\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:39 crc kubenswrapper[4904]: I0223 10:43:39.944103 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:40 crc kubenswrapper[4904]: I0223 10:43:40.413273 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94ng2"] Feb 23 10:43:40 crc kubenswrapper[4904]: I0223 10:43:40.780382 4904 generic.go:334] "Generic (PLEG): container finished" podID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerID="c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481" exitCode=0 Feb 23 10:43:40 crc kubenswrapper[4904]: I0223 10:43:40.780437 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ng2" event={"ID":"03e8461c-7f4c-4402-8af5-ed9074c8a536","Type":"ContainerDied","Data":"c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481"} Feb 23 10:43:40 crc kubenswrapper[4904]: I0223 10:43:40.780497 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ng2" event={"ID":"03e8461c-7f4c-4402-8af5-ed9074c8a536","Type":"ContainerStarted","Data":"c0d4fcdca030d8865d570bbbdb48c45cd27ef756ffc5fb26f67b53479559eef7"} Feb 23 10:43:42 crc kubenswrapper[4904]: I0223 10:43:42.808010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ng2" event={"ID":"03e8461c-7f4c-4402-8af5-ed9074c8a536","Type":"ContainerStarted","Data":"53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd"} Feb 23 10:43:46 crc kubenswrapper[4904]: I0223 10:43:46.857729 4904 generic.go:334] "Generic (PLEG): container finished" podID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerID="53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd" exitCode=0 Feb 23 10:43:46 crc kubenswrapper[4904]: I0223 10:43:46.858382 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ng2" event={"ID":"03e8461c-7f4c-4402-8af5-ed9074c8a536","Type":"ContainerDied","Data":"53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd"} Feb 23 10:43:47 crc kubenswrapper[4904]: I0223 10:43:47.397962 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:43:47 crc kubenswrapper[4904]: I0223 10:43:47.398252 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:43:47 crc kubenswrapper[4904]: I0223 10:43:47.876015 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ng2" event={"ID":"03e8461c-7f4c-4402-8af5-ed9074c8a536","Type":"ContainerStarted","Data":"43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3"} Feb 23 
10:43:49 crc kubenswrapper[4904]: I0223 10:43:49.944981 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:49 crc kubenswrapper[4904]: I0223 10:43:49.945360 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:43:51 crc kubenswrapper[4904]: I0223 10:43:51.009468 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-94ng2" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="registry-server" probeResult="failure" output=< Feb 23 10:43:51 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 10:43:51 crc kubenswrapper[4904]: > Feb 23 10:44:00 crc kubenswrapper[4904]: I0223 10:44:00.042296 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:44:00 crc kubenswrapper[4904]: I0223 10:44:00.079556 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94ng2" podStartSLOduration=14.608781464 podStartE2EDuration="21.079533542s" podCreationTimestamp="2026-02-23 10:43:39 +0000 UTC" firstStartedPulling="2026-02-23 10:43:40.782258159 +0000 UTC m=+2254.202631672" lastFinishedPulling="2026-02-23 10:43:47.253010237 +0000 UTC m=+2260.673383750" observedRunningTime="2026-02-23 10:43:47.904510982 +0000 UTC m=+2261.324884485" watchObservedRunningTime="2026-02-23 10:44:00.079533542 +0000 UTC m=+2273.499907065" Feb 23 10:44:00 crc kubenswrapper[4904]: I0223 10:44:00.134647 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:44:00 crc kubenswrapper[4904]: I0223 10:44:00.300360 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94ng2"] Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.040487 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-94ng2" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="registry-server" containerID="cri-o://43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3" gracePeriod=2 Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.590984 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.775580 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-catalog-content\") pod \"03e8461c-7f4c-4402-8af5-ed9074c8a536\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.775857 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-utilities\") pod \"03e8461c-7f4c-4402-8af5-ed9074c8a536\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.776096 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvs8v\" (UniqueName: \"kubernetes.io/projected/03e8461c-7f4c-4402-8af5-ed9074c8a536-kube-api-access-vvs8v\") pod \"03e8461c-7f4c-4402-8af5-ed9074c8a536\" (UID: \"03e8461c-7f4c-4402-8af5-ed9074c8a536\") " Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.777211 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-utilities" (OuterVolumeSpecName: "utilities") pod "03e8461c-7f4c-4402-8af5-ed9074c8a536" (UID: "03e8461c-7f4c-4402-8af5-ed9074c8a536"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.788375 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e8461c-7f4c-4402-8af5-ed9074c8a536-kube-api-access-vvs8v" (OuterVolumeSpecName: "kube-api-access-vvs8v") pod "03e8461c-7f4c-4402-8af5-ed9074c8a536" (UID: "03e8461c-7f4c-4402-8af5-ed9074c8a536"). InnerVolumeSpecName "kube-api-access-vvs8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.878737 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.878776 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvs8v\" (UniqueName: \"kubernetes.io/projected/03e8461c-7f4c-4402-8af5-ed9074c8a536-kube-api-access-vvs8v\") on node \"crc\" DevicePath \"\"" Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.941254 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03e8461c-7f4c-4402-8af5-ed9074c8a536" (UID: "03e8461c-7f4c-4402-8af5-ed9074c8a536"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:44:02 crc kubenswrapper[4904]: I0223 10:44:02.980198 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e8461c-7f4c-4402-8af5-ed9074c8a536-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.054698 4904 generic.go:334] "Generic (PLEG): container finished" podID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerID="43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3" exitCode=0 Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.055024 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94ng2" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.055015 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ng2" event={"ID":"03e8461c-7f4c-4402-8af5-ed9074c8a536","Type":"ContainerDied","Data":"43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3"} Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.055431 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ng2" event={"ID":"03e8461c-7f4c-4402-8af5-ed9074c8a536","Type":"ContainerDied","Data":"c0d4fcdca030d8865d570bbbdb48c45cd27ef756ffc5fb26f67b53479559eef7"} Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.055475 4904 scope.go:117] "RemoveContainer" containerID="43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.079668 4904 scope.go:117] "RemoveContainer" containerID="53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.106535 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94ng2"] Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.116600 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-94ng2"] Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.162363 4904 scope.go:117] "RemoveContainer" containerID="c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.185936 4904 scope.go:117] "RemoveContainer" containerID="43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3" Feb 23 10:44:03 crc kubenswrapper[4904]: E0223 10:44:03.186378 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3\": container with ID starting with 43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3 not found: ID does not exist" containerID="43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.186410 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3"} err="failed to get container status \"43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3\": rpc error: code = NotFound desc = could not find container \"43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3\": container with ID starting with 43afcb406333a7e3045f962292e6ee993c7a0e28cb339ef3a7885e0618964aa3 not found: ID does not exist" Feb 23 10:44:03 crc 
kubenswrapper[4904]: I0223 10:44:03.186430 4904 scope.go:117] "RemoveContainer" containerID="53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd" Feb 23 10:44:03 crc kubenswrapper[4904]: E0223 10:44:03.186865 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd\": container with ID starting with 53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd not found: ID does not exist" containerID="53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.186937 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd"} err="failed to get container status \"53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd\": rpc error: code = NotFound desc = could not find container \"53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd\": container with ID starting with 53df5af751b76190766e80fb5543f35f55b071e50ec975fecc67ddd71797dffd not found: ID does not exist" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.186983 4904 scope.go:117] "RemoveContainer" containerID="c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481" Feb 23 10:44:03 crc kubenswrapper[4904]: E0223 10:44:03.187586 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481\": container with ID starting with c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481 not found: ID does not exist" containerID="c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.187635 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481"} err="failed to get container status \"c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481\": rpc error: code = NotFound desc = could not find container \"c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481\": container with ID starting with c6d06b5ac37f60c77a038024798059802bc1088331782bdd4a9199c320c2d481 not found: ID does not exist" Feb 23 10:44:03 crc kubenswrapper[4904]: I0223 10:44:03.268576 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" path="/var/lib/kubelet/pods/03e8461c-7f4c-4402-8af5-ed9074c8a536/volumes" Feb 23 10:44:17 crc kubenswrapper[4904]: I0223 10:44:17.397943 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:44:17 crc kubenswrapper[4904]: I0223 10:44:17.398897 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:44:17 crc kubenswrapper[4904]: I0223 10:44:17.399021 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:44:17 crc kubenswrapper[4904]: I0223 10:44:17.400408 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:44:17 crc kubenswrapper[4904]: I0223 10:44:17.400499 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" gracePeriod=600 Feb 23 10:44:17 crc kubenswrapper[4904]: E0223 10:44:17.530540 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:44:18 crc kubenswrapper[4904]: I0223 10:44:18.219555 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" exitCode=0 Feb 23 10:44:18 crc kubenswrapper[4904]: I0223 10:44:18.219619 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d"} Feb 23 10:44:18 crc kubenswrapper[4904]: I0223 10:44:18.219648 4904 scope.go:117] "RemoveContainer" containerID="414846274ece15cca21a2594f0ee6206cc2db384fa114ea705e7276249d406b9" Feb 23 10:44:18 crc kubenswrapper[4904]: I0223 10:44:18.220373 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:44:18 crc kubenswrapper[4904]: E0223 10:44:18.220767 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.285441 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:44:31 crc kubenswrapper[4904]: E0223 10:44:31.286903 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:44:31 
crc kubenswrapper[4904]: I0223 10:44:31.545799 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w877m"] Feb 23 10:44:31 crc kubenswrapper[4904]: E0223 10:44:31.546479 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="extract-utilities" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.546496 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="extract-utilities" Feb 23 10:44:31 crc kubenswrapper[4904]: E0223 10:44:31.546536 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="registry-server" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.546543 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="registry-server" Feb 23 10:44:31 crc kubenswrapper[4904]: E0223 10:44:31.546557 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="extract-content" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.546562 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="extract-content" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.546758 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e8461c-7f4c-4402-8af5-ed9074c8a536" containerName="registry-server" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.548114 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.565669 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w877m"] Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.701037 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-catalog-content\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.701136 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-utilities\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.701235 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwpks\" (UniqueName: \"kubernetes.io/projected/1b13bdb2-9077-40f0-a870-beed72325fdd-kube-api-access-jwpks\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.803204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-catalog-content\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " 
pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.803289 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-utilities\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.803319 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwpks\" (UniqueName: \"kubernetes.io/projected/1b13bdb2-9077-40f0-a870-beed72325fdd-kube-api-access-jwpks\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.803674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-catalog-content\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.803913 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-utilities\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.824590 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwpks\" (UniqueName: \"kubernetes.io/projected/1b13bdb2-9077-40f0-a870-beed72325fdd-kube-api-access-jwpks\") pod \"certified-operators-w877m\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:31 crc kubenswrapper[4904]: I0223 10:44:31.864429 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:32 crc kubenswrapper[4904]: I0223 10:44:32.373889 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w877m"] Feb 23 10:44:32 crc kubenswrapper[4904]: I0223 10:44:32.396536 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w877m" event={"ID":"1b13bdb2-9077-40f0-a870-beed72325fdd","Type":"ContainerStarted","Data":"9152d8c6e35a7f99f661332fee25aaf499f29e2ee054f4b54390c3683943e922"} Feb 23 10:44:33 crc kubenswrapper[4904]: I0223 10:44:33.425367 4904 generic.go:334] "Generic (PLEG): container finished" podID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerID="6fdc43d2cf903d8d0ed231c131cf59033f61ed64d4c0134be66db5eea0941492" exitCode=0 Feb 23 10:44:33 crc kubenswrapper[4904]: I0223 10:44:33.425465 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w877m" event={"ID":"1b13bdb2-9077-40f0-a870-beed72325fdd","Type":"ContainerDied","Data":"6fdc43d2cf903d8d0ed231c131cf59033f61ed64d4c0134be66db5eea0941492"} Feb 23 10:44:34 crc kubenswrapper[4904]: I0223 10:44:34.446333 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w877m" event={"ID":"1b13bdb2-9077-40f0-a870-beed72325fdd","Type":"ContainerStarted","Data":"b5afbe61ff6b97be19cc72cb191f2a9665134660302f480b8c8d1a5451b41f71"} Feb 23 10:44:36 crc kubenswrapper[4904]: I0223 10:44:36.470238 4904 generic.go:334] "Generic (PLEG): container finished" podID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerID="b5afbe61ff6b97be19cc72cb191f2a9665134660302f480b8c8d1a5451b41f71" exitCode=0 Feb 23 10:44:36 crc kubenswrapper[4904]: I0223 10:44:36.470321 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w877m" event={"ID":"1b13bdb2-9077-40f0-a870-beed72325fdd","Type":"ContainerDied","Data":"b5afbe61ff6b97be19cc72cb191f2a9665134660302f480b8c8d1a5451b41f71"} Feb 23 10:44:37 crc kubenswrapper[4904]: I0223 10:44:37.480507 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w877m" event={"ID":"1b13bdb2-9077-40f0-a870-beed72325fdd","Type":"ContainerStarted","Data":"54e2da8e8a1342724f16d77d2077e9445a1c89de943b15fa99efd437cf15906d"} Feb 23 10:44:37 crc kubenswrapper[4904]: I0223 10:44:37.513911 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w877m" podStartSLOduration=3.10810708 podStartE2EDuration="6.513893572s" podCreationTimestamp="2026-02-23 10:44:31 +0000 UTC" firstStartedPulling="2026-02-23 10:44:33.428023452 +0000 UTC m=+2306.848396995" lastFinishedPulling="2026-02-23 10:44:36.833809974 +0000 UTC m=+2310.254183487" observedRunningTime="2026-02-23 10:44:37.509285341 +0000 UTC m=+2310.929658854" watchObservedRunningTime="2026-02-23 10:44:37.513893572 +0000 UTC m=+2310.934267095" Feb 23 10:44:41 crc kubenswrapper[4904]: I0223 10:44:41.865079 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:41 crc kubenswrapper[4904]: I0223 10:44:41.865884 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:41 crc kubenswrapper[4904]: I0223 10:44:41.950453 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:42 crc kubenswrapper[4904]: I0223 10:44:42.597596 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:42 crc kubenswrapper[4904]: I0223 10:44:42.673817 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w877m"] Feb 23 10:44:43 crc kubenswrapper[4904]: I0223 10:44:43.256251 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:44:43 crc kubenswrapper[4904]: E0223 10:44:43.256597 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:44:44 crc kubenswrapper[4904]: I0223 10:44:44.556097 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w877m" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerName="registry-server" containerID="cri-o://54e2da8e8a1342724f16d77d2077e9445a1c89de943b15fa99efd437cf15906d" gracePeriod=2 Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.565233 4904 generic.go:334] "Generic (PLEG): container finished" podID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerID="54e2da8e8a1342724f16d77d2077e9445a1c89de943b15fa99efd437cf15906d" exitCode=0 Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.565353 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w877m" event={"ID":"1b13bdb2-9077-40f0-a870-beed72325fdd","Type":"ContainerDied","Data":"54e2da8e8a1342724f16d77d2077e9445a1c89de943b15fa99efd437cf15906d"} Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.565539 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w877m" event={"ID":"1b13bdb2-9077-40f0-a870-beed72325fdd","Type":"ContainerDied","Data":"9152d8c6e35a7f99f661332fee25aaf499f29e2ee054f4b54390c3683943e922"} Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.565552 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9152d8c6e35a7f99f661332fee25aaf499f29e2ee054f4b54390c3683943e922" Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.646267 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.696387 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-catalog-content\") pod \"1b13bdb2-9077-40f0-a870-beed72325fdd\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.696437 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-utilities\") pod \"1b13bdb2-9077-40f0-a870-beed72325fdd\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.696478 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwpks\" (UniqueName: \"kubernetes.io/projected/1b13bdb2-9077-40f0-a870-beed72325fdd-kube-api-access-jwpks\") pod \"1b13bdb2-9077-40f0-a870-beed72325fdd\" (UID: \"1b13bdb2-9077-40f0-a870-beed72325fdd\") " Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.698787 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-utilities" (OuterVolumeSpecName: "utilities") pod "1b13bdb2-9077-40f0-a870-beed72325fdd" (UID: "1b13bdb2-9077-40f0-a870-beed72325fdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.703261 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b13bdb2-9077-40f0-a870-beed72325fdd-kube-api-access-jwpks" (OuterVolumeSpecName: "kube-api-access-jwpks") pod "1b13bdb2-9077-40f0-a870-beed72325fdd" (UID: "1b13bdb2-9077-40f0-a870-beed72325fdd"). InnerVolumeSpecName "kube-api-access-jwpks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.744911 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b13bdb2-9077-40f0-a870-beed72325fdd" (UID: "1b13bdb2-9077-40f0-a870-beed72325fdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.799314 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.799344 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b13bdb2-9077-40f0-a870-beed72325fdd-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:44:45 crc kubenswrapper[4904]: I0223 10:44:45.799357 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwpks\" (UniqueName: \"kubernetes.io/projected/1b13bdb2-9077-40f0-a870-beed72325fdd-kube-api-access-jwpks\") on node \"crc\" DevicePath \"\"" Feb 23 10:44:46 crc kubenswrapper[4904]: I0223 10:44:46.577209 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w877m" Feb 23 10:44:46 crc kubenswrapper[4904]: I0223 10:44:46.624530 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w877m"] Feb 23 10:44:46 crc kubenswrapper[4904]: I0223 10:44:46.637509 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w877m"] Feb 23 10:44:47 crc kubenswrapper[4904]: I0223 10:44:47.274545 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" path="/var/lib/kubelet/pods/1b13bdb2-9077-40f0-a870-beed72325fdd/volumes" Feb 23 10:44:56 crc kubenswrapper[4904]: I0223 10:44:56.256797 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:44:56 crc kubenswrapper[4904]: E0223 10:44:56.258001 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.161291 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q"] Feb 23 10:45:00 crc kubenswrapper[4904]: E0223 10:45:00.163582 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerName="registry-server" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.163691 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerName="registry-server" Feb 23 10:45:00 crc kubenswrapper[4904]: E0223 10:45:00.163834 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerName="extract-content" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.163912 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerName="extract-content" Feb 23 10:45:00 crc kubenswrapper[4904]: E0223 10:45:00.164003 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerName="extract-utilities" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.164091 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerName="extract-utilities" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.164408 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b13bdb2-9077-40f0-a870-beed72325fdd" containerName="registry-server" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.166430 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.169052 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.176437 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.182780 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q"] Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.231753 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59020053-fe67-410f-9d41-08609943074a-secret-volume\") pod \"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.231978 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2qw\" (UniqueName: \"kubernetes.io/projected/59020053-fe67-410f-9d41-08609943074a-kube-api-access-sz2qw\") pod \"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.232047 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59020053-fe67-410f-9d41-08609943074a-config-volume\") pod \"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.334565 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2qw\" (UniqueName: \"kubernetes.io/projected/59020053-fe67-410f-9d41-08609943074a-kube-api-access-sz2qw\") pod \"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.337112 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59020053-fe67-410f-9d41-08609943074a-config-volume\") pod \"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.335559 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59020053-fe67-410f-9d41-08609943074a-config-volume\") pod \"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.338179 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59020053-fe67-410f-9d41-08609943074a-secret-volume\") pod 
\"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.344674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59020053-fe67-410f-9d41-08609943074a-secret-volume\") pod \"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.355506 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2qw\" (UniqueName: \"kubernetes.io/projected/59020053-fe67-410f-9d41-08609943074a-kube-api-access-sz2qw\") pod \"collect-profiles-29530725-rlc5q\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.503166 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:00 crc kubenswrapper[4904]: I0223 10:45:00.977432 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q"] Feb 23 10:45:01 crc kubenswrapper[4904]: I0223 10:45:01.973121 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" event={"ID":"59020053-fe67-410f-9d41-08609943074a","Type":"ContainerStarted","Data":"2bdd0265d678d0660fcfb90519a61ade9b5fb40a572a3d91db3ad92df4d382d9"} Feb 23 10:45:01 crc kubenswrapper[4904]: I0223 10:45:01.973560 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" event={"ID":"59020053-fe67-410f-9d41-08609943074a","Type":"ContainerStarted","Data":"e2d539266f4f3d160564f4f731d1362f73a94fa396c1b92354c501246e66d26f"} Feb 23 10:45:02 crc kubenswrapper[4904]: I0223 10:45:02.003447 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" podStartSLOduration=2.00342206 podStartE2EDuration="2.00342206s" podCreationTimestamp="2026-02-23 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:45:01.996518463 +0000 UTC m=+2335.416891986" watchObservedRunningTime="2026-02-23 10:45:02.00342206 +0000 UTC m=+2335.423795583" Feb 23 10:45:02 crc kubenswrapper[4904]: I0223 10:45:02.982528 4904 generic.go:334] "Generic (PLEG): container finished" podID="59020053-fe67-410f-9d41-08609943074a" containerID="2bdd0265d678d0660fcfb90519a61ade9b5fb40a572a3d91db3ad92df4d382d9" exitCode=0 Feb 23 10:45:02 crc kubenswrapper[4904]: I0223 10:45:02.982685 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" event={"ID":"59020053-fe67-410f-9d41-08609943074a","Type":"ContainerDied","Data":"2bdd0265d678d0660fcfb90519a61ade9b5fb40a572a3d91db3ad92df4d382d9"} Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.363321 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.508069 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59020053-fe67-410f-9d41-08609943074a-config-volume\") pod \"59020053-fe67-410f-9d41-08609943074a\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.508425 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59020053-fe67-410f-9d41-08609943074a-secret-volume\") pod \"59020053-fe67-410f-9d41-08609943074a\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.508503 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2qw\" (UniqueName: \"kubernetes.io/projected/59020053-fe67-410f-9d41-08609943074a-kube-api-access-sz2qw\") pod \"59020053-fe67-410f-9d41-08609943074a\" (UID: \"59020053-fe67-410f-9d41-08609943074a\") " Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.508856 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59020053-fe67-410f-9d41-08609943074a-config-volume" (OuterVolumeSpecName: "config-volume") pod "59020053-fe67-410f-9d41-08609943074a" (UID: "59020053-fe67-410f-9d41-08609943074a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.509100 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59020053-fe67-410f-9d41-08609943074a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.513349 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59020053-fe67-410f-9d41-08609943074a-kube-api-access-sz2qw" (OuterVolumeSpecName: "kube-api-access-sz2qw") pod "59020053-fe67-410f-9d41-08609943074a" (UID: "59020053-fe67-410f-9d41-08609943074a"). InnerVolumeSpecName "kube-api-access-sz2qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.516094 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59020053-fe67-410f-9d41-08609943074a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "59020053-fe67-410f-9d41-08609943074a" (UID: "59020053-fe67-410f-9d41-08609943074a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.610754 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59020053-fe67-410f-9d41-08609943074a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:45:04 crc kubenswrapper[4904]: I0223 10:45:04.610800 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2qw\" (UniqueName: \"kubernetes.io/projected/59020053-fe67-410f-9d41-08609943074a-kube-api-access-sz2qw\") on node \"crc\" DevicePath \"\"" Feb 23 10:45:05 crc kubenswrapper[4904]: I0223 10:45:05.005034 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" event={"ID":"59020053-fe67-410f-9d41-08609943074a","Type":"ContainerDied","Data":"e2d539266f4f3d160564f4f731d1362f73a94fa396c1b92354c501246e66d26f"} Feb 23 10:45:05 crc kubenswrapper[4904]: I0223 10:45:05.005287 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2d539266f4f3d160564f4f731d1362f73a94fa396c1b92354c501246e66d26f" Feb 23 10:45:05 crc kubenswrapper[4904]: I0223 10:45:05.005113 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q" Feb 23 10:45:05 crc kubenswrapper[4904]: I0223 10:45:05.080465 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z"] Feb 23 10:45:05 crc kubenswrapper[4904]: I0223 10:45:05.091550 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530680-dwm5z"] Feb 23 10:45:05 crc kubenswrapper[4904]: I0223 10:45:05.277944 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c1c227-0297-4ba0-9acb-4690cffd0554" path="/var/lib/kubelet/pods/d2c1c227-0297-4ba0-9acb-4690cffd0554/volumes" Feb 23 10:45:06 crc kubenswrapper[4904]: I0223 10:45:06.016367 4904 generic.go:334] "Generic (PLEG): container finished" podID="7c4a4b83-33c3-418e-a2b7-ad52490fc88a" containerID="49b4683d7b146d3d21447a00fcc47c6b21c8b821414ba2db0ebfb0771c682723" exitCode=0 Feb 23 10:45:06 crc kubenswrapper[4904]: I0223 10:45:06.016479 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" event={"ID":"7c4a4b83-33c3-418e-a2b7-ad52490fc88a","Type":"ContainerDied","Data":"49b4683d7b146d3d21447a00fcc47c6b21c8b821414ba2db0ebfb0771c682723"} Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.653734 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.810022 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-secret-0\") pod \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.810167 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-ssh-key-openstack-edpm-ipam\") pod \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.810268 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-combined-ca-bundle\") pod \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.810353 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-kube-api-access-bxv5t\") pod \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.810418 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-inventory\") pod \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\" (UID: \"7c4a4b83-33c3-418e-a2b7-ad52490fc88a\") " Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.816837 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7c4a4b83-33c3-418e-a2b7-ad52490fc88a" (UID: "7c4a4b83-33c3-418e-a2b7-ad52490fc88a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.825388 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-kube-api-access-bxv5t" (OuterVolumeSpecName: "kube-api-access-bxv5t") pod "7c4a4b83-33c3-418e-a2b7-ad52490fc88a" (UID: "7c4a4b83-33c3-418e-a2b7-ad52490fc88a"). InnerVolumeSpecName "kube-api-access-bxv5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.848251 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-inventory" (OuterVolumeSpecName: "inventory") pod "7c4a4b83-33c3-418e-a2b7-ad52490fc88a" (UID: "7c4a4b83-33c3-418e-a2b7-ad52490fc88a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.866692 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7c4a4b83-33c3-418e-a2b7-ad52490fc88a" (UID: "7c4a4b83-33c3-418e-a2b7-ad52490fc88a"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.866843 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c4a4b83-33c3-418e-a2b7-ad52490fc88a" (UID: "7c4a4b83-33c3-418e-a2b7-ad52490fc88a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.913301 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxv5t\" (UniqueName: \"kubernetes.io/projected/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-kube-api-access-bxv5t\") on node \"crc\" DevicePath \"\"" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.913356 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.913376 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.913393 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:45:07 crc kubenswrapper[4904]: I0223 10:45:07.913411 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4a4b83-33c3-418e-a2b7-ad52490fc88a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.052435 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" event={"ID":"7c4a4b83-33c3-418e-a2b7-ad52490fc88a","Type":"ContainerDied","Data":"b9a3f5936fc3c26e1167ecb1e72b188479832dbed058234971970c1ac8d0e2ec"} Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.053017 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a3f5936fc3c26e1167ecb1e72b188479832dbed058234971970c1ac8d0e2ec" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.052517 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.165712 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb"] Feb 23 10:45:08 crc kubenswrapper[4904]: E0223 10:45:08.166315 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59020053-fe67-410f-9d41-08609943074a" containerName="collect-profiles" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.166342 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="59020053-fe67-410f-9d41-08609943074a" containerName="collect-profiles" Feb 23 10:45:08 crc kubenswrapper[4904]: E0223 10:45:08.166358 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4a4b83-33c3-418e-a2b7-ad52490fc88a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.166368 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4a4b83-33c3-418e-a2b7-ad52490fc88a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.166584 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="59020053-fe67-410f-9d41-08609943074a" containerName="collect-profiles" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.166612 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4a4b83-33c3-418e-a2b7-ad52490fc88a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.167539 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.170466 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.170566 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.170626 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.170642 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.170809 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.174485 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.177605 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.182967 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb"] Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.322014 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdpln\" (UniqueName: \"kubernetes.io/projected/97a52242-6885-47ca-8ee9-6f11cdadad18-kube-api-access-mdpln\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") 
" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.322132 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.322605 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.322678 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.322942 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.323123 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.323252 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.323372 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.325803 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.428102 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.428222 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.428319 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.428388 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.428457 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.428517 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.428613 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.428802 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdpln\" (UniqueName: 
\"kubernetes.io/projected/97a52242-6885-47ca-8ee9-6f11cdadad18-kube-api-access-mdpln\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.429049 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.429366 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.433674 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.434792 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.435487 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.436584 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.436938 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.437249 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.438291 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.460224 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdpln\" (UniqueName: \"kubernetes.io/projected/97a52242-6885-47ca-8ee9-6f11cdadad18-kube-api-access-mdpln\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vqkfb\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:08 crc kubenswrapper[4904]: I0223 10:45:08.493677 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:45:09 crc kubenswrapper[4904]: I0223 10:45:09.167245 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb"] Feb 23 10:45:09 crc kubenswrapper[4904]: I0223 10:45:09.255935 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:45:09 crc kubenswrapper[4904]: E0223 10:45:09.256662 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:45:10 crc kubenswrapper[4904]: I0223 10:45:10.074647 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" event={"ID":"97a52242-6885-47ca-8ee9-6f11cdadad18","Type":"ContainerStarted","Data":"39707600da0baaec26a374afc4ad56956c9ebf6d33c76edda5cc39d77f33feab"} Feb 23 10:45:10 crc kubenswrapper[4904]: I0223 10:45:10.076109 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" event={"ID":"97a52242-6885-47ca-8ee9-6f11cdadad18","Type":"ContainerStarted","Data":"814c5a096e1d62416411eefb96c9ba70baae31c10c0577e5d5ad98086d3aec69"} Feb 23 10:45:10 crc kubenswrapper[4904]: I0223 10:45:10.108244 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" podStartSLOduration=1.534121994 podStartE2EDuration="2.108228977s" podCreationTimestamp="2026-02-23 10:45:08 +0000 UTC" firstStartedPulling="2026-02-23 10:45:09.170680074 +0000 UTC m=+2342.591053597" lastFinishedPulling="2026-02-23 10:45:09.744787057 +0000 UTC m=+2343.165160580" observedRunningTime="2026-02-23 10:45:10.101291639 +0000 UTC m=+2343.521665162" watchObservedRunningTime="2026-02-23 10:45:10.108228977 +0000 UTC m=+2343.528602490" Feb 23 10:45:24 crc kubenswrapper[4904]: I0223 10:45:24.256870 4904 scope.go:117] "RemoveContainer" 
containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:45:24 crc kubenswrapper[4904]: E0223 10:45:24.258345 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:45:38 crc kubenswrapper[4904]: I0223 10:45:38.255854 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:45:38 crc kubenswrapper[4904]: E0223 10:45:38.256642 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:45:46 crc kubenswrapper[4904]: I0223 10:45:46.029996 4904 scope.go:117] "RemoveContainer" containerID="cfd0e62e772ceb74e5ce46e5b9578441a9e996885a7994906337a9dcac3275fc" Feb 23 10:45:53 crc kubenswrapper[4904]: I0223 10:45:53.256543 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:45:53 crc kubenswrapper[4904]: E0223 10:45:53.259348 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:46:08 crc kubenswrapper[4904]: I0223 10:46:08.256254 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:46:08 crc kubenswrapper[4904]: E0223 10:46:08.257523 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:46:19 crc kubenswrapper[4904]: I0223 10:46:19.258427 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:46:19 crc kubenswrapper[4904]: E0223 10:46:19.261322 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:46:30 crc kubenswrapper[4904]: I0223 10:46:30.256972 4904 scope.go:117] "RemoveContainer" 
containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:46:30 crc kubenswrapper[4904]: E0223 10:46:30.258321 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:46:44 crc kubenswrapper[4904]: I0223 10:46:44.255786 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:46:44 crc kubenswrapper[4904]: E0223 10:46:44.256986 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:46:56 crc kubenswrapper[4904]: I0223 10:46:56.255966 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:46:56 crc kubenswrapper[4904]: E0223 10:46:56.256961 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.285090 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q4dz8"] Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.288455 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.302088 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4dz8"] Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.471115 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-catalog-content\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.472463 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-utilities\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.472513 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjwl4\" (UniqueName: \"kubernetes.io/projected/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-kube-api-access-cjwl4\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.573947 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-utilities\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.573994 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjwl4\" (UniqueName: \"kubernetes.io/projected/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-kube-api-access-cjwl4\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.574661 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-catalog-content\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.574668 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-utilities\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.575126 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-catalog-content\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.594478 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cjwl4\" (UniqueName: \"kubernetes.io/projected/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-kube-api-access-cjwl4\") pod \"redhat-marketplace-q4dz8\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:02 crc kubenswrapper[4904]: I0223 10:47:02.658732 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:03 crc kubenswrapper[4904]: I0223 10:47:03.150120 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4dz8"] Feb 23 10:47:03 crc kubenswrapper[4904]: I0223 10:47:03.358943 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4dz8" event={"ID":"a62e8abe-330e-450e-b9fd-4568b9dc2a8f","Type":"ContainerStarted","Data":"335aa409aead0ced682fc0af45efc17986dcf6d6e1f45ce3d3815d2d8f5195b0"} Feb 23 10:47:04 crc kubenswrapper[4904]: I0223 10:47:04.369197 4904 generic.go:334] "Generic (PLEG): container finished" podID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerID="29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783" exitCode=0 Feb 23 10:47:04 crc kubenswrapper[4904]: I0223 10:47:04.369295 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4dz8" event={"ID":"a62e8abe-330e-450e-b9fd-4568b9dc2a8f","Type":"ContainerDied","Data":"29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783"} Feb 23 10:47:05 crc kubenswrapper[4904]: I0223 10:47:05.383060 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4dz8" event={"ID":"a62e8abe-330e-450e-b9fd-4568b9dc2a8f","Type":"ContainerStarted","Data":"4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa"} Feb 23 10:47:07 crc kubenswrapper[4904]: I0223 10:47:07.406030 4904 generic.go:334] "Generic (PLEG): container finished" podID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerID="4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa" exitCode=0 Feb 23 10:47:07 crc kubenswrapper[4904]: I0223 10:47:07.406084 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4dz8" event={"ID":"a62e8abe-330e-450e-b9fd-4568b9dc2a8f","Type":"ContainerDied","Data":"4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa"} Feb 23 10:47:08 crc kubenswrapper[4904]: I0223 10:47:08.418547 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4dz8" event={"ID":"a62e8abe-330e-450e-b9fd-4568b9dc2a8f","Type":"ContainerStarted","Data":"8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e"} Feb 23 10:47:08 crc kubenswrapper[4904]: I0223 10:47:08.445030 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q4dz8" podStartSLOduration=3.013091813 podStartE2EDuration="6.445013058s" podCreationTimestamp="2026-02-23 10:47:02 +0000 UTC" firstStartedPulling="2026-02-23 10:47:04.373564808 +0000 UTC m=+2457.793938321" lastFinishedPulling="2026-02-23 10:47:07.805486043 +0000 UTC m=+2461.225859566" observedRunningTime="2026-02-23 10:47:08.435062854 +0000 UTC m=+2461.855436377" watchObservedRunningTime="2026-02-23 10:47:08.445013058 +0000 UTC m=+2461.865386561" Feb 23 10:47:11 crc kubenswrapper[4904]: I0223 10:47:11.255192 4904 scope.go:117] "RemoveContainer" 
containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:47:11 crc kubenswrapper[4904]: E0223 10:47:11.256075 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:47:12 crc kubenswrapper[4904]: I0223 10:47:12.659856 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:12 crc kubenswrapper[4904]: I0223 10:47:12.660402 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:12 crc kubenswrapper[4904]: I0223 10:47:12.729213 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:13 crc kubenswrapper[4904]: I0223 10:47:13.540008 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:13 crc kubenswrapper[4904]: I0223 10:47:13.608812 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4dz8"] Feb 23 10:47:15 crc kubenswrapper[4904]: I0223 10:47:15.485145 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q4dz8" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerName="registry-server" containerID="cri-o://8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e" gracePeriod=2 Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.041661 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.161700 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-catalog-content\") pod \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.161875 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjwl4\" (UniqueName: \"kubernetes.io/projected/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-kube-api-access-cjwl4\") pod \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.161987 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-utilities\") pod \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\" (UID: \"a62e8abe-330e-450e-b9fd-4568b9dc2a8f\") " Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.163091 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-utilities" (OuterVolumeSpecName: "utilities") pod "a62e8abe-330e-450e-b9fd-4568b9dc2a8f" (UID: "a62e8abe-330e-450e-b9fd-4568b9dc2a8f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.179140 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-kube-api-access-cjwl4" (OuterVolumeSpecName: "kube-api-access-cjwl4") pod "a62e8abe-330e-450e-b9fd-4568b9dc2a8f" (UID: "a62e8abe-330e-450e-b9fd-4568b9dc2a8f"). InnerVolumeSpecName "kube-api-access-cjwl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.204316 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a62e8abe-330e-450e-b9fd-4568b9dc2a8f" (UID: "a62e8abe-330e-450e-b9fd-4568b9dc2a8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.265186 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.265854 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjwl4\" (UniqueName: \"kubernetes.io/projected/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-kube-api-access-cjwl4\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.266021 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62e8abe-330e-450e-b9fd-4568b9dc2a8f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.500812 4904 generic.go:334] "Generic (PLEG): container finished" podID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerID="8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e" exitCode=0 Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.500874 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4dz8" event={"ID":"a62e8abe-330e-450e-b9fd-4568b9dc2a8f","Type":"ContainerDied","Data":"8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e"} Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.502065 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4dz8" event={"ID":"a62e8abe-330e-450e-b9fd-4568b9dc2a8f","Type":"ContainerDied","Data":"335aa409aead0ced682fc0af45efc17986dcf6d6e1f45ce3d3815d2d8f5195b0"} Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.502104 4904 scope.go:117] "RemoveContainer" containerID="8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.500975 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4dz8" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.549884 4904 scope.go:117] "RemoveContainer" containerID="4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.560626 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4dz8"] Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.577671 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4dz8"] Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.586446 4904 scope.go:117] "RemoveContainer" containerID="29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.612260 4904 scope.go:117] "RemoveContainer" containerID="8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e" Feb 23 10:47:16 crc kubenswrapper[4904]: E0223 10:47:16.612725 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e\": container with ID starting with 8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e not found: ID does not exist" containerID="8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.612753 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e"} err="failed to get container status \"8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e\": rpc error: code = NotFound desc = could not find container \"8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e\": container with ID starting with 8774771985ccd7b739e8224ba3eebcf3b53e514bb63f1ce764ba0b62bfaa909e not found: ID does not exist" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.612771 4904 scope.go:117] "RemoveContainer" containerID="4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa" Feb 23 10:47:16 crc kubenswrapper[4904]: E0223 10:47:16.613213 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa\": container with ID starting with 4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa not found: ID does not exist" containerID="4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.613273 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa"} err="failed to get container status \"4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa\": rpc error: code = NotFound desc = could not find container \"4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa\": container with ID starting with 4550f11d55048ea3029484da14c49f801c64651262e70d09091520a61516b4aa not found: ID does not exist" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.613315 4904 scope.go:117] "RemoveContainer" containerID="29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783" Feb 23 10:47:16 crc kubenswrapper[4904]: E0223 10:47:16.613867 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783\": container with ID starting with 29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783 not found: ID does not exist" containerID="29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783" Feb 23 10:47:16 crc kubenswrapper[4904]: I0223 10:47:16.613899 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783"} err="failed to get container status \"29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783\": rpc error: code = NotFound desc = could not find container \"29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783\": container with ID starting with 29c64c8ddffeec7d0342ca5d51d70abfbbd361ed84161d8c5421b4c681a47783 not found: ID does not exist" Feb 23 10:47:17 crc kubenswrapper[4904]: I0223 10:47:17.277257 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" path="/var/lib/kubelet/pods/a62e8abe-330e-450e-b9fd-4568b9dc2a8f/volumes" Feb 23 10:47:22 crc kubenswrapper[4904]: I0223 10:47:22.256013 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:47:22 crc kubenswrapper[4904]: E0223 10:47:22.256810 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:47:34 crc kubenswrapper[4904]: I0223 10:47:34.255729 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:47:34 crc kubenswrapper[4904]: E0223 10:47:34.256661 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:47:42 crc kubenswrapper[4904]: I0223 10:47:42.799928 4904 generic.go:334] "Generic (PLEG): container finished" podID="97a52242-6885-47ca-8ee9-6f11cdadad18" containerID="39707600da0baaec26a374afc4ad56956c9ebf6d33c76edda5cc39d77f33feab" exitCode=0 Feb 23 10:47:42 crc kubenswrapper[4904]: I0223 10:47:42.800011 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" event={"ID":"97a52242-6885-47ca-8ee9-6f11cdadad18","Type":"ContainerDied","Data":"39707600da0baaec26a374afc4ad56956c9ebf6d33c76edda5cc39d77f33feab"} Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.281752 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.438888 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdpln\" (UniqueName: \"kubernetes.io/projected/97a52242-6885-47ca-8ee9-6f11cdadad18-kube-api-access-mdpln\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.439036 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-0\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.439128 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-ssh-key-openstack-edpm-ipam\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.439154 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-1\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.439202 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-0\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.439233 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-extra-config-0\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.439265 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-combined-ca-bundle\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.439290 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-inventory\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.439332 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-1\") pod \"97a52242-6885-47ca-8ee9-6f11cdadad18\" (UID: \"97a52242-6885-47ca-8ee9-6f11cdadad18\") " Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.445260 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.446930 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a52242-6885-47ca-8ee9-6f11cdadad18-kube-api-access-mdpln" (OuterVolumeSpecName: "kube-api-access-mdpln") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "kube-api-access-mdpln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.467850 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.469344 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.469774 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.472215 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-inventory" (OuterVolumeSpecName: "inventory") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.478013 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.484532 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.490445 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "97a52242-6885-47ca-8ee9-6f11cdadad18" (UID: "97a52242-6885-47ca-8ee9-6f11cdadad18"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541736 4904 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541767 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541777 4904 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541786 4904 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541796 4904 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541804 4904 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541815 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541823 4904 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97a52242-6885-47ca-8ee9-6f11cdadad18-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.541832 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdpln\" (UniqueName: \"kubernetes.io/projected/97a52242-6885-47ca-8ee9-6f11cdadad18-kube-api-access-mdpln\") on node \"crc\" DevicePath \"\"" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.843621 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" event={"ID":"97a52242-6885-47ca-8ee9-6f11cdadad18","Type":"ContainerDied","Data":"814c5a096e1d62416411eefb96c9ba70baae31c10c0577e5d5ad98086d3aec69"} Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.843996 4904 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="814c5a096e1d62416411eefb96c9ba70baae31c10c0577e5d5ad98086d3aec69" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.843662 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vqkfb" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.957322 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2"] Feb 23 10:47:44 crc kubenswrapper[4904]: E0223 10:47:44.957897 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerName="extract-utilities" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.957922 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerName="extract-utilities" Feb 23 10:47:44 crc kubenswrapper[4904]: E0223 10:47:44.957940 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerName="extract-content" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.957948 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerName="extract-content" Feb 23 10:47:44 crc kubenswrapper[4904]: E0223 10:47:44.957983 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerName="registry-server" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.957991 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerName="registry-server" Feb 23 10:47:44 crc kubenswrapper[4904]: E0223 10:47:44.958007 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a52242-6885-47ca-8ee9-6f11cdadad18" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.958015 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a52242-6885-47ca-8ee9-6f11cdadad18" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.958220 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62e8abe-330e-450e-b9fd-4568b9dc2a8f" containerName="registry-server" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.958248 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a52242-6885-47ca-8ee9-6f11cdadad18" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.959108 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.960978 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.961367 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.961444 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.961984 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-c72bm" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.962679 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 10:47:44 crc kubenswrapper[4904]: I0223 10:47:44.982410 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2"] Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.051598 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.051705 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.051932 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.052064 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.052168 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 
10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.052257 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mqrk\" (UniqueName: \"kubernetes.io/projected/e3551d06-5e0e-4a2d-886f-9a617433cfcd-kube-api-access-7mqrk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.052285 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.154005 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.154081 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.154130 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.154177 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mqrk\" (UniqueName: \"kubernetes.io/projected/e3551d06-5e0e-4a2d-886f-9a617433cfcd-kube-api-access-7mqrk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.154199 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.154250 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.154459 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.158730 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.159233 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.159766 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.159904 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.166589 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.167421 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.184081 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mqrk\" (UniqueName: \"kubernetes.io/projected/e3551d06-5e0e-4a2d-886f-9a617433cfcd-kube-api-access-7mqrk\") 
pod \"telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.275176 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.803417 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2"] Feb 23 10:47:45 crc kubenswrapper[4904]: I0223 10:47:45.852338 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" event={"ID":"e3551d06-5e0e-4a2d-886f-9a617433cfcd","Type":"ContainerStarted","Data":"138c60341bdd93e9972d151b8b88956dbbde9aa82bf4e83a994a859927a7a301"} Feb 23 10:47:47 crc kubenswrapper[4904]: I0223 10:47:47.911490 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" event={"ID":"e3551d06-5e0e-4a2d-886f-9a617433cfcd","Type":"ContainerStarted","Data":"8220f628b0384dc77a8a090c2000522cecfa8b01d91c8145cae1fe63f701a73b"} Feb 23 10:47:47 crc kubenswrapper[4904]: I0223 10:47:47.946038 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" podStartSLOduration=3.130453471 podStartE2EDuration="3.946016294s" podCreationTimestamp="2026-02-23 10:47:44 +0000 UTC" firstStartedPulling="2026-02-23 10:47:45.814262346 +0000 UTC m=+2499.234635859" lastFinishedPulling="2026-02-23 10:47:46.629825139 +0000 UTC m=+2500.050198682" observedRunningTime="2026-02-23 10:47:47.942491573 +0000 UTC m=+2501.362865126" watchObservedRunningTime="2026-02-23 10:47:47.946016294 +0000 UTC m=+2501.366389807" Feb 23 10:47:48 crc kubenswrapper[4904]: I0223 10:47:48.255620 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:47:48 crc kubenswrapper[4904]: E0223 10:47:48.256219 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:48:02 crc kubenswrapper[4904]: I0223 10:48:02.255277 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:48:02 crc kubenswrapper[4904]: E0223 10:48:02.256237 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:48:17 crc kubenswrapper[4904]: I0223 10:48:17.269818 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:48:17 crc kubenswrapper[4904]: E0223 10:48:17.270651 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:48:30 crc kubenswrapper[4904]: I0223 10:48:30.256178 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:48:30 crc kubenswrapper[4904]: E0223 10:48:30.259133 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:48:41 crc kubenswrapper[4904]: I0223 10:48:41.260048 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:48:41 crc kubenswrapper[4904]: E0223 10:48:41.261271 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:48:53 crc kubenswrapper[4904]: I0223 10:48:53.255659 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:48:53 crc kubenswrapper[4904]: E0223 10:48:53.256806 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:49:08 crc kubenswrapper[4904]: I0223 10:49:08.256094 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:49:08 crc kubenswrapper[4904]: E0223 10:49:08.257419 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:49:22 crc kubenswrapper[4904]: I0223 10:49:22.256464 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:49:22 crc kubenswrapper[4904]: I0223 10:49:22.976363 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" 
event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"f18ec8c05d4821b3f51727c1f44896a7ad72660de5ec42d4493693f856351dbb"} Feb 23 10:49:46 crc kubenswrapper[4904]: I0223 10:49:46.241369 4904 generic.go:334] "Generic (PLEG): container finished" podID="e3551d06-5e0e-4a2d-886f-9a617433cfcd" containerID="8220f628b0384dc77a8a090c2000522cecfa8b01d91c8145cae1fe63f701a73b" exitCode=0 Feb 23 10:49:46 crc kubenswrapper[4904]: I0223 10:49:46.242005 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" event={"ID":"e3551d06-5e0e-4a2d-886f-9a617433cfcd","Type":"ContainerDied","Data":"8220f628b0384dc77a8a090c2000522cecfa8b01d91c8145cae1fe63f701a73b"} Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.720888 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.828753 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ssh-key-openstack-edpm-ipam\") pod \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.828869 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-inventory\") pod \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.828897 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-1\") pod \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.828932 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-2\") pod \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.829051 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-telemetry-combined-ca-bundle\") pod \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.829084 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mqrk\" (UniqueName: \"kubernetes.io/projected/e3551d06-5e0e-4a2d-886f-9a617433cfcd-kube-api-access-7mqrk\") pod \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\" (UID: \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.829140 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-0\") pod \"e3551d06-5e0e-4a2d-886f-9a617433cfcd\" (UID: 
\"e3551d06-5e0e-4a2d-886f-9a617433cfcd\") " Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.834615 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3551d06-5e0e-4a2d-886f-9a617433cfcd-kube-api-access-7mqrk" (OuterVolumeSpecName: "kube-api-access-7mqrk") pod "e3551d06-5e0e-4a2d-886f-9a617433cfcd" (UID: "e3551d06-5e0e-4a2d-886f-9a617433cfcd"). InnerVolumeSpecName "kube-api-access-7mqrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.847077 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e3551d06-5e0e-4a2d-886f-9a617433cfcd" (UID: "e3551d06-5e0e-4a2d-886f-9a617433cfcd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.866634 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-inventory" (OuterVolumeSpecName: "inventory") pod "e3551d06-5e0e-4a2d-886f-9a617433cfcd" (UID: "e3551d06-5e0e-4a2d-886f-9a617433cfcd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.867564 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e3551d06-5e0e-4a2d-886f-9a617433cfcd" (UID: "e3551d06-5e0e-4a2d-886f-9a617433cfcd"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.868278 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3551d06-5e0e-4a2d-886f-9a617433cfcd" (UID: "e3551d06-5e0e-4a2d-886f-9a617433cfcd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.876955 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e3551d06-5e0e-4a2d-886f-9a617433cfcd" (UID: "e3551d06-5e0e-4a2d-886f-9a617433cfcd"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.892616 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e3551d06-5e0e-4a2d-886f-9a617433cfcd" (UID: "e3551d06-5e0e-4a2d-886f-9a617433cfcd"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.930636 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mqrk\" (UniqueName: \"kubernetes.io/projected/e3551d06-5e0e-4a2d-886f-9a617433cfcd-kube-api-access-7mqrk\") on node \"crc\" DevicePath \"\"" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.930666 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.930676 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.930687 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.930698 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.930706 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 23 10:49:47 crc kubenswrapper[4904]: I0223 10:49:47.930727 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3551d06-5e0e-4a2d-886f-9a617433cfcd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:49:48 crc kubenswrapper[4904]: I0223 10:49:48.265350 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" event={"ID":"e3551d06-5e0e-4a2d-886f-9a617433cfcd","Type":"ContainerDied","Data":"138c60341bdd93e9972d151b8b88956dbbde9aa82bf4e83a994a859927a7a301"} Feb 23 10:49:48 crc kubenswrapper[4904]: I0223 10:49:48.265395 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138c60341bdd93e9972d151b8b88956dbbde9aa82bf4e83a994a859927a7a301" Feb 23 10:49:48 crc kubenswrapper[4904]: I0223 10:49:48.265471 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2" Feb 23 10:50:29 crc kubenswrapper[4904]: I0223 10:50:29.689611 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:50:29 crc kubenswrapper[4904]: I0223 10:50:29.692779 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="prometheus" containerID="cri-o://96647140c6dd17d6322f7c14c7bcfcb6840b9849a7856424100c7345c067f6ca" gracePeriod=600 Feb 23 10:50:29 crc kubenswrapper[4904]: I0223 10:50:29.692895 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="config-reloader" containerID="cri-o://c198dd66ddc7e5ee4830171dad765fab5ea1988b9d0ee4ebd18f2781141f791e" gracePeriod=600 Feb 23 10:50:29 crc kubenswrapper[4904]: I0223 10:50:29.692871 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="thanos-sidecar" containerID="cri-o://f863103165774fefdba1d9b373e02e8b1778ce9fc42f128424fe5d91c260e4d3" gracePeriod=600 Feb 23 10:50:30 crc kubenswrapper[4904]: I0223 10:50:30.789115 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c937d26-4043-491e-8d60-2ac2216169b6" containerID="f863103165774fefdba1d9b373e02e8b1778ce9fc42f128424fe5d91c260e4d3" exitCode=0 Feb 23 10:50:30 crc kubenswrapper[4904]: I0223 10:50:30.789515 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c937d26-4043-491e-8d60-2ac2216169b6" containerID="c198dd66ddc7e5ee4830171dad765fab5ea1988b9d0ee4ebd18f2781141f791e" exitCode=0 Feb 23 10:50:30 crc kubenswrapper[4904]: I0223 10:50:30.789536 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c937d26-4043-491e-8d60-2ac2216169b6" containerID="96647140c6dd17d6322f7c14c7bcfcb6840b9849a7856424100c7345c067f6ca" exitCode=0 Feb 23 10:50:30 crc kubenswrapper[4904]: I0223 10:50:30.789187 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerDied","Data":"f863103165774fefdba1d9b373e02e8b1778ce9fc42f128424fe5d91c260e4d3"} Feb 23 10:50:30 crc kubenswrapper[4904]: I0223 10:50:30.789587 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerDied","Data":"c198dd66ddc7e5ee4830171dad765fab5ea1988b9d0ee4ebd18f2781141f791e"} Feb 23 10:50:30 crc kubenswrapper[4904]: I0223 10:50:30.789609 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerDied","Data":"96647140c6dd17d6322f7c14c7bcfcb6840b9849a7856424100c7345c067f6ca"} Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.409584 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546307 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546429 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-2\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546463 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-0\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546535 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1c937d26-4043-491e-8d60-2ac2216169b6-config-out\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546564 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-config\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546621 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-thanos-prometheus-http-client-file\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546644 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-tls-assets\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546667 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546684 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: 
\"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546728 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkfwt\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-kube-api-access-tkfwt\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546751 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-secret-combined-ca-bundle\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546799 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.546832 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-1\") pod \"1c937d26-4043-491e-8d60-2ac2216169b6\" (UID: \"1c937d26-4043-491e-8d60-2ac2216169b6\") " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.547099 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.547390 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.547684 4904 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.547724 4904 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.548484 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.553723 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.553802 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-config" (OuterVolumeSpecName: "config") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.553811 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.553861 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.556294 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c937d26-4043-491e-8d60-2ac2216169b6-config-out" (OuterVolumeSpecName: "config-out") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.558667 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.565911 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.573013 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-kube-api-access-tkfwt" (OuterVolumeSpecName: "kube-api-access-tkfwt") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "kube-api-access-tkfwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.592534 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "pvc-0698fc29-13b4-43aa-8032-06d87cf4b025". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.638478 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config" (OuterVolumeSpecName: "web-config") pod "1c937d26-4043-491e-8d60-2ac2216169b6" (UID: "1c937d26-4043-491e-8d60-2ac2216169b6"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650076 4904 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1c937d26-4043-491e-8d60-2ac2216169b6-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650116 4904 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1c937d26-4043-491e-8d60-2ac2216169b6-config-out\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650130 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650143 4904 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650153 4904 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650164 4904 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650177 4904 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650191 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkfwt\" (UniqueName: \"kubernetes.io/projected/1c937d26-4043-491e-8d60-2ac2216169b6-kube-api-access-tkfwt\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650202 4904 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650212 4904 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1c937d26-4043-491e-8d60-2ac2216169b6-web-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.650257 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") on node \"crc\" " Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.677822 4904 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.677952 4904 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0698fc29-13b4-43aa-8032-06d87cf4b025" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025") on node "crc" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.752103 4904 reconciler_common.go:293] "Volume detached for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") on node \"crc\" DevicePath \"\"" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.800512 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1c937d26-4043-491e-8d60-2ac2216169b6","Type":"ContainerDied","Data":"a53a5175a1612d83cdfbc52c5b78adc2e044346d6e028509ac720bbd69a216b1"} Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.800564 4904 scope.go:117] "RemoveContainer" containerID="f863103165774fefdba1d9b373e02e8b1778ce9fc42f128424fe5d91c260e4d3" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.800606 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.829559 4904 scope.go:117] "RemoveContainer" containerID="c198dd66ddc7e5ee4830171dad765fab5ea1988b9d0ee4ebd18f2781141f791e" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.861805 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.871579 4904 scope.go:117] "RemoveContainer" containerID="96647140c6dd17d6322f7c14c7bcfcb6840b9849a7856424100c7345c067f6ca" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.872578 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.886252 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:50:31 crc kubenswrapper[4904]: E0223 10:50:31.886782 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="init-config-reloader" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.886812 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="init-config-reloader" Feb 23 10:50:31 crc kubenswrapper[4904]: E0223 10:50:31.886825 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="prometheus" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.886832 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="prometheus" Feb 23 10:50:31 crc kubenswrapper[4904]: E0223 10:50:31.886840 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3551d06-5e0e-4a2d-886f-9a617433cfcd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.886847 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3551d06-5e0e-4a2d-886f-9a617433cfcd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 10:50:31 crc kubenswrapper[4904]: E0223 10:50:31.886858 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" 
containerName="config-reloader" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.886864 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="config-reloader" Feb 23 10:50:31 crc kubenswrapper[4904]: E0223 10:50:31.886911 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="thanos-sidecar" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.886917 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="thanos-sidecar" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.887166 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="config-reloader" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.887180 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="prometheus" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.887208 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3551d06-5e0e-4a2d-886f-9a617433cfcd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.887224 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="thanos-sidecar" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.890179 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.902798 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.902837 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.902863 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.902988 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-c4rwz" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.903090 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.903109 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.910328 4904 scope.go:117] "RemoveContainer" containerID="5a489beb4de90f6dddc4bd2689ad37d8857caf8b7ac00e673b7777d4793ec722" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.905708 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.915492 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.925788 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960516 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960553 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960579 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960628 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960668 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-config\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960682 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960750 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bprf\" (UniqueName: \"kubernetes.io/projected/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-kube-api-access-4bprf\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960774 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960797 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960819 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960839 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960942 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:31 crc kubenswrapper[4904]: I0223 10:50:31.960978 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062123 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062349 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062378 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062432 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062473 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-config\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062491 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062536 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bprf\" (UniqueName: \"kubernetes.io/projected/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-kube-api-access-4bprf\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062561 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062585 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062611 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062631 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062654 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " 
pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.062674 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.064106 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.064117 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.064146 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.065802 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.065959 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/40819dc70d8445a50995e9a88cc270de788496e012ad6d3d513a831f13ec32aa/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.067893 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.068696 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.068807 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.069667 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.070133 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.070784 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-config\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.071468 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.071965 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.081974 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bprf\" (UniqueName: \"kubernetes.io/projected/203579f0-ffe4-42a4-91fd-b4a4340eb9fc-kube-api-access-4bprf\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.106645 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0698fc29-13b4-43aa-8032-06d87cf4b025\") pod \"prometheus-metric-storage-0\" (UID: \"203579f0-ffe4-42a4-91fd-b4a4340eb9fc\") " pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.230177 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.734973 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 23 10:50:32 crc kubenswrapper[4904]: I0223 10:50:32.810696 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"203579f0-ffe4-42a4-91fd-b4a4340eb9fc","Type":"ContainerStarted","Data":"480b4fed8110cd99a1c9b971733bc0e391afafdd655db36485d5737c5638ff38"} Feb 23 10:50:33 crc kubenswrapper[4904]: I0223 10:50:33.268160 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" path="/var/lib/kubelet/pods/1c937d26-4043-491e-8d60-2ac2216169b6/volumes" Feb 23 10:50:34 crc kubenswrapper[4904]: I0223 10:50:34.216322 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1c937d26-4043-491e-8d60-2ac2216169b6" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.132:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 10:50:36 crc kubenswrapper[4904]: I0223 10:50:36.862148 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"203579f0-ffe4-42a4-91fd-b4a4340eb9fc","Type":"ContainerStarted","Data":"1ac29fad619d918e0d8e2bb671a73feb63a1820329243434641ffe50ff021c5b"} Feb 23 10:50:46 crc kubenswrapper[4904]: I0223 10:50:46.244205 4904 scope.go:117] "RemoveContainer" containerID="54e2da8e8a1342724f16d77d2077e9445a1c89de943b15fa99efd437cf15906d" Feb 23 10:50:46 crc kubenswrapper[4904]: I0223 10:50:46.269113 4904 scope.go:117] "RemoveContainer" containerID="6fdc43d2cf903d8d0ed231c131cf59033f61ed64d4c0134be66db5eea0941492" Feb 23 10:50:46 crc kubenswrapper[4904]: I0223 10:50:46.292047 4904 scope.go:117] "RemoveContainer" containerID="b5afbe61ff6b97be19cc72cb191f2a9665134660302f480b8c8d1a5451b41f71" Feb 23 10:50:46 crc kubenswrapper[4904]: I0223 10:50:46.971589 4904 generic.go:334] "Generic (PLEG): container finished" podID="203579f0-ffe4-42a4-91fd-b4a4340eb9fc" containerID="1ac29fad619d918e0d8e2bb671a73feb63a1820329243434641ffe50ff021c5b" exitCode=0 Feb 23 10:50:46 crc 
kubenswrapper[4904]: I0223 10:50:46.971660 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"203579f0-ffe4-42a4-91fd-b4a4340eb9fc","Type":"ContainerDied","Data":"1ac29fad619d918e0d8e2bb671a73feb63a1820329243434641ffe50ff021c5b"} Feb 23 10:50:47 crc kubenswrapper[4904]: I0223 10:50:47.991159 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"203579f0-ffe4-42a4-91fd-b4a4340eb9fc","Type":"ContainerStarted","Data":"d7cc6ac4ad90b951884bcdac4e82321b017c97413d96ec003a5f440e00b79ab7"} Feb 23 10:50:53 crc kubenswrapper[4904]: I0223 10:50:53.066349 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"203579f0-ffe4-42a4-91fd-b4a4340eb9fc","Type":"ContainerStarted","Data":"87f46954c2e695dcbc587f62e970da1e65525866314e44e96007e7271650af43"} Feb 23 10:50:53 crc kubenswrapper[4904]: I0223 10:50:53.066653 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"203579f0-ffe4-42a4-91fd-b4a4340eb9fc","Type":"ContainerStarted","Data":"d29421ed1a4d5cec62c589725e9bf5457c0f6d07aad2d568cdaae4372dd50ab0"} Feb 23 10:50:53 crc kubenswrapper[4904]: I0223 10:50:53.113585 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.113560402 podStartE2EDuration="22.113560402s" podCreationTimestamp="2026-02-23 10:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:50:53.104130283 +0000 UTC m=+2686.524503796" watchObservedRunningTime="2026-02-23 10:50:53.113560402 +0000 UTC m=+2686.533933935" Feb 23 10:50:57 crc kubenswrapper[4904]: I0223 10:50:57.231047 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 23 10:51:02 crc kubenswrapper[4904]: I0223 10:51:02.232604 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 23 10:51:02 crc kubenswrapper[4904]: I0223 10:51:02.239752 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 23 10:51:03 crc kubenswrapper[4904]: I0223 10:51:03.178931 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.166301 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.168605 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.170513 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.170835 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.171044 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.171395 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-v878l" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.188662 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.188905 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.189460 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-config-data\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.193767 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292070 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292127 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292234 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292269 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292312 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x4kc\" (UniqueName: \"kubernetes.io/projected/463cdfc8-f595-4253-8fcf-4da5d843fcf8-kube-api-access-9x4kc\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292364 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292410 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292451 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.292475 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-config-data\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.294170 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-config-data\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.295332 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.307382 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.393953 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: 
\"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.393999 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.394065 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.394083 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.394110 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x4kc\" (UniqueName: \"kubernetes.io/projected/463cdfc8-f595-4253-8fcf-4da5d843fcf8-kube-api-access-9x4kc\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.394184 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.394796 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.398964 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.399331 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.399680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " 
pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.403810 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.436644 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x4kc\" (UniqueName: \"kubernetes.io/projected/463cdfc8-f595-4253-8fcf-4da5d843fcf8-kube-api-access-9x4kc\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.452200 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.492699 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.983625 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 10:51:25 crc kubenswrapper[4904]: I0223 10:51:25.999131 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:51:26 crc kubenswrapper[4904]: I0223 10:51:26.458392 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"463cdfc8-f595-4253-8fcf-4da5d843fcf8","Type":"ContainerStarted","Data":"3c6fdb7861c9ec2faae9fac7f244dcec3e6ad5df705b215c4307c716e60cb914"} Feb 23 10:51:38 crc kubenswrapper[4904]: I0223 10:51:38.586926 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"463cdfc8-f595-4253-8fcf-4da5d843fcf8","Type":"ContainerStarted","Data":"1acde66d21a46184c17918bca62648acb0a148bc9c736d993fe51828779742f2"} Feb 23 10:51:47 crc kubenswrapper[4904]: I0223 10:51:47.398543 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:51:47 crc kubenswrapper[4904]: I0223 10:51:47.399380 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:52:17 crc kubenswrapper[4904]: I0223 10:52:17.398683 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:52:17 crc kubenswrapper[4904]: I0223 10:52:17.399389 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" 
podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:52:47 crc kubenswrapper[4904]: I0223 10:52:47.398948 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:52:47 crc kubenswrapper[4904]: I0223 10:52:47.400249 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:52:47 crc kubenswrapper[4904]: I0223 10:52:47.400411 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:52:47 crc kubenswrapper[4904]: I0223 10:52:47.401887 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f18ec8c05d4821b3f51727c1f44896a7ad72660de5ec42d4493693f856351dbb"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:52:47 crc kubenswrapper[4904]: I0223 10:52:47.402057 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://f18ec8c05d4821b3f51727c1f44896a7ad72660de5ec42d4493693f856351dbb" gracePeriod=600 Feb 23 10:52:48 crc kubenswrapper[4904]: I0223 10:52:48.526444 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="f18ec8c05d4821b3f51727c1f44896a7ad72660de5ec42d4493693f856351dbb" exitCode=0 Feb 23 10:52:48 crc kubenswrapper[4904]: I0223 10:52:48.526515 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"f18ec8c05d4821b3f51727c1f44896a7ad72660de5ec42d4493693f856351dbb"} Feb 23 10:52:48 crc kubenswrapper[4904]: I0223 10:52:48.526824 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d"} Feb 23 10:52:48 crc kubenswrapper[4904]: I0223 10:52:48.526861 4904 scope.go:117] "RemoveContainer" containerID="c94d62a4cea6d0c65e9c839130db53a5d861d8ad7bd442f80ac110650369361d" Feb 23 10:52:48 crc kubenswrapper[4904]: I0223 10:52:48.551755 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=73.560222479 podStartE2EDuration="1m24.551733723s" podCreationTimestamp="2026-02-23 10:51:24 +0000 UTC" firstStartedPulling="2026-02-23 10:51:25.998878006 +0000 UTC m=+2719.419251519" lastFinishedPulling="2026-02-23 10:51:36.99038925 +0000 UTC m=+2730.410762763" 
observedRunningTime="2026-02-23 10:51:38.621020829 +0000 UTC m=+2732.041394372" watchObservedRunningTime="2026-02-23 10:52:48.551733723 +0000 UTC m=+2801.972107236" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.295620 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x4bc9"] Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.301829 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.311623 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x4bc9"] Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.370417 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-catalog-content\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.370610 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-utilities\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.370973 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7xh\" (UniqueName: \"kubernetes.io/projected/43398af5-b0bd-417b-9a74-620aca9a9fea-kube-api-access-sc7xh\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.473380 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7xh\" (UniqueName: \"kubernetes.io/projected/43398af5-b0bd-417b-9a74-620aca9a9fea-kube-api-access-sc7xh\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.473513 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-catalog-content\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.473619 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-utilities\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.474277 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-catalog-content\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc 
kubenswrapper[4904]: I0223 10:54:32.474476 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-utilities\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.496138 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7xh\" (UniqueName: \"kubernetes.io/projected/43398af5-b0bd-417b-9a74-620aca9a9fea-kube-api-access-sc7xh\") pod \"certified-operators-x4bc9\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:32 crc kubenswrapper[4904]: I0223 10:54:32.638555 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:33 crc kubenswrapper[4904]: I0223 10:54:33.050426 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x4bc9"] Feb 23 10:54:33 crc kubenswrapper[4904]: I0223 10:54:33.760886 4904 generic.go:334] "Generic (PLEG): container finished" podID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerID="063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9" exitCode=0 Feb 23 10:54:33 crc kubenswrapper[4904]: I0223 10:54:33.760934 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4bc9" event={"ID":"43398af5-b0bd-417b-9a74-620aca9a9fea","Type":"ContainerDied","Data":"063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9"} Feb 23 10:54:33 crc kubenswrapper[4904]: I0223 10:54:33.760970 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4bc9" event={"ID":"43398af5-b0bd-417b-9a74-620aca9a9fea","Type":"ContainerStarted","Data":"7c1b7373d95fc2614d8ccfd280448eefa05c084e06ba3b81b0abc3eea8caf43e"} Feb 23 10:54:34 crc kubenswrapper[4904]: I0223 10:54:34.776885 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4bc9" event={"ID":"43398af5-b0bd-417b-9a74-620aca9a9fea","Type":"ContainerStarted","Data":"fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac"} Feb 23 10:54:36 crc kubenswrapper[4904]: I0223 10:54:36.803882 4904 generic.go:334] "Generic (PLEG): container finished" podID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerID="fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac" exitCode=0 Feb 23 10:54:36 crc kubenswrapper[4904]: I0223 10:54:36.803978 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4bc9" event={"ID":"43398af5-b0bd-417b-9a74-620aca9a9fea","Type":"ContainerDied","Data":"fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac"} Feb 23 10:54:37 crc kubenswrapper[4904]: I0223 10:54:37.818914 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4bc9" event={"ID":"43398af5-b0bd-417b-9a74-620aca9a9fea","Type":"ContainerStarted","Data":"e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd"} Feb 23 10:54:37 crc kubenswrapper[4904]: I0223 10:54:37.853794 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x4bc9" podStartSLOduration=2.356934147 podStartE2EDuration="5.853764867s" podCreationTimestamp="2026-02-23 10:54:32 +0000 
UTC" firstStartedPulling="2026-02-23 10:54:33.762843036 +0000 UTC m=+2907.183216569" lastFinishedPulling="2026-02-23 10:54:37.259673776 +0000 UTC m=+2910.680047289" observedRunningTime="2026-02-23 10:54:37.842443655 +0000 UTC m=+2911.262817188" watchObservedRunningTime="2026-02-23 10:54:37.853764867 +0000 UTC m=+2911.274138390" Feb 23 10:54:42 crc kubenswrapper[4904]: I0223 10:54:42.639166 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:42 crc kubenswrapper[4904]: I0223 10:54:42.639674 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:42 crc kubenswrapper[4904]: I0223 10:54:42.700911 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:42 crc kubenswrapper[4904]: I0223 10:54:42.957874 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:43 crc kubenswrapper[4904]: I0223 10:54:43.018295 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x4bc9"] Feb 23 10:54:44 crc kubenswrapper[4904]: I0223 10:54:44.919970 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x4bc9" podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerName="registry-server" containerID="cri-o://e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd" gracePeriod=2 Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.443017 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.587810 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-utilities\") pod \"43398af5-b0bd-417b-9a74-620aca9a9fea\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.587891 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-catalog-content\") pod \"43398af5-b0bd-417b-9a74-620aca9a9fea\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.588151 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc7xh\" (UniqueName: \"kubernetes.io/projected/43398af5-b0bd-417b-9a74-620aca9a9fea-kube-api-access-sc7xh\") pod \"43398af5-b0bd-417b-9a74-620aca9a9fea\" (UID: \"43398af5-b0bd-417b-9a74-620aca9a9fea\") " Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.588566 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-utilities" (OuterVolumeSpecName: "utilities") pod "43398af5-b0bd-417b-9a74-620aca9a9fea" (UID: "43398af5-b0bd-417b-9a74-620aca9a9fea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.589264 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.595490 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43398af5-b0bd-417b-9a74-620aca9a9fea-kube-api-access-sc7xh" (OuterVolumeSpecName: "kube-api-access-sc7xh") pod "43398af5-b0bd-417b-9a74-620aca9a9fea" (UID: "43398af5-b0bd-417b-9a74-620aca9a9fea"). InnerVolumeSpecName "kube-api-access-sc7xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.643157 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43398af5-b0bd-417b-9a74-620aca9a9fea" (UID: "43398af5-b0bd-417b-9a74-620aca9a9fea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.691512 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43398af5-b0bd-417b-9a74-620aca9a9fea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.691549 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc7xh\" (UniqueName: \"kubernetes.io/projected/43398af5-b0bd-417b-9a74-620aca9a9fea-kube-api-access-sc7xh\") on node \"crc\" DevicePath \"\"" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.937655 4904 generic.go:334] "Generic (PLEG): container finished" podID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerID="e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd" exitCode=0 Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.937692 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4bc9" event={"ID":"43398af5-b0bd-417b-9a74-620aca9a9fea","Type":"ContainerDied","Data":"e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd"} Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.937734 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4bc9" event={"ID":"43398af5-b0bd-417b-9a74-620aca9a9fea","Type":"ContainerDied","Data":"7c1b7373d95fc2614d8ccfd280448eefa05c084e06ba3b81b0abc3eea8caf43e"} Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.937751 4904 scope.go:117] "RemoveContainer" containerID="e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.937787 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x4bc9" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.981802 4904 scope.go:117] "RemoveContainer" containerID="fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac" Feb 23 10:54:45 crc kubenswrapper[4904]: I0223 10:54:45.987078 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x4bc9"] Feb 23 10:54:46 crc kubenswrapper[4904]: I0223 10:54:46.001499 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x4bc9"] Feb 23 10:54:46 crc kubenswrapper[4904]: I0223 10:54:46.012065 4904 scope.go:117] "RemoveContainer" containerID="063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9" Feb 23 10:54:46 crc kubenswrapper[4904]: I0223 10:54:46.060618 4904 scope.go:117] "RemoveContainer" containerID="e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd" Feb 23 10:54:46 crc kubenswrapper[4904]: E0223 10:54:46.061393 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd\": container with ID starting with e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd not found: ID does not exist" containerID="e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd" Feb 23 10:54:46 crc kubenswrapper[4904]: I0223 10:54:46.061492 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd"} err="failed to get container status \"e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd\": rpc error: code = NotFound desc = could not find container \"e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd\": container with ID starting with e772b2bad68f90d6f7c2fe3e98d9d6f05b152bb4df6791959c387c123b46ecfd not found: ID does not exist" Feb 23 10:54:46 crc kubenswrapper[4904]: I0223 10:54:46.061598 4904 scope.go:117] "RemoveContainer" containerID="fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac" Feb 23 10:54:46 crc kubenswrapper[4904]: E0223 10:54:46.061992 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac\": container with ID starting with fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac not found: ID does not exist" containerID="fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac" Feb 23 10:54:46 crc kubenswrapper[4904]: I0223 10:54:46.062031 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac"} err="failed to get container status \"fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac\": rpc error: code = NotFound desc = could not find container \"fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac\": container with ID starting with fb9193c8d6af647b17465c1928192bebec6e5b514206419a38ef467fe11127ac not found: ID does not exist" Feb 23 10:54:46 crc kubenswrapper[4904]: I0223 10:54:46.062056 4904 scope.go:117] "RemoveContainer" containerID="063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9" Feb 23 10:54:46 crc kubenswrapper[4904]: E0223 10:54:46.062271 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9\": container with ID starting with 063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9 not found: ID does not exist" containerID="063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9" Feb 23 10:54:46 crc kubenswrapper[4904]: I0223 10:54:46.062297 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9"} err="failed to get container status \"063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9\": rpc error: code = NotFound desc = could not find container \"063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9\": container with ID starting with 063065ee3a2e830625d4de070388aeba1566490215b1ca00f26bf26b343921e9 not found: ID does not exist" Feb 23 10:54:47 crc kubenswrapper[4904]: I0223 10:54:47.272700 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" path="/var/lib/kubelet/pods/43398af5-b0bd-417b-9a74-620aca9a9fea/volumes" Feb 23 10:54:47 crc kubenswrapper[4904]: I0223 10:54:47.398832 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:54:47 crc kubenswrapper[4904]: I0223 10:54:47.398906 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:55:17 crc kubenswrapper[4904]: I0223 10:55:17.397653 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:55:17 crc kubenswrapper[4904]: I0223 10:55:17.398204 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.398014 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.399122 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.399200 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.400492 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.400605 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" gracePeriod=600 Feb 23 10:55:47 crc kubenswrapper[4904]: E0223 10:55:47.524589 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.943531 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" exitCode=0 Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.943608 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d"} Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.943900 4904 scope.go:117] "RemoveContainer" containerID="f18ec8c05d4821b3f51727c1f44896a7ad72660de5ec42d4493693f856351dbb" Feb 23 10:55:47 crc kubenswrapper[4904]: I0223 10:55:47.944771 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:55:47 crc kubenswrapper[4904]: E0223 10:55:47.945304 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.608588 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t7qxv"] Feb 23 10:55:55 crc kubenswrapper[4904]: E0223 10:55:55.609600 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerName="extract-content" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.609616 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerName="extract-content" Feb 23 10:55:55 crc kubenswrapper[4904]: E0223 10:55:55.609647 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerName="extract-utilities" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.609655 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerName="extract-utilities" Feb 23 10:55:55 crc kubenswrapper[4904]: E0223 10:55:55.609685 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerName="registry-server" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.609693 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerName="registry-server" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.609928 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="43398af5-b0bd-417b-9a74-620aca9a9fea" containerName="registry-server" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.611608 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.642975 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t7qxv"] Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.742884 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-utilities\") pod \"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.743246 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczgn\" (UniqueName: \"kubernetes.io/projected/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-kube-api-access-gczgn\") pod \"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.743291 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-catalog-content\") pod \"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.844769 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczgn\" (UniqueName: \"kubernetes.io/projected/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-kube-api-access-gczgn\") pod \"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.844815 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-catalog-content\") pod \"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.845291 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-catalog-content\") pod 
\"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.845365 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-utilities\") pod \"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.845592 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-utilities\") pod \"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.865041 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczgn\" (UniqueName: \"kubernetes.io/projected/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-kube-api-access-gczgn\") pod \"community-operators-t7qxv\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:55 crc kubenswrapper[4904]: I0223 10:55:55.935854 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.446611 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t7qxv"] Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.607330 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-22m5x"] Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.609480 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.630529 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22m5x"] Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.764823 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkmrx\" (UniqueName: \"kubernetes.io/projected/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-kube-api-access-kkmrx\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.764904 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-catalog-content\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.764944 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-utilities\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.866784 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-utilities\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.866934 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkmrx\" (UniqueName: \"kubernetes.io/projected/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-kube-api-access-kkmrx\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.866992 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-catalog-content\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.867498 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-catalog-content\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.867640 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-utilities\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.892803 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kkmrx\" (UniqueName: \"kubernetes.io/projected/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-kube-api-access-kkmrx\") pod \"redhat-operators-22m5x\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:56 crc kubenswrapper[4904]: I0223 10:55:56.926320 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:55:57 crc kubenswrapper[4904]: I0223 10:55:57.082740 4904 generic.go:334] "Generic (PLEG): container finished" podID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerID="a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82" exitCode=0 Feb 23 10:55:57 crc kubenswrapper[4904]: I0223 10:55:57.082793 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7qxv" event={"ID":"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103","Type":"ContainerDied","Data":"a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82"} Feb 23 10:55:57 crc kubenswrapper[4904]: I0223 10:55:57.083070 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7qxv" event={"ID":"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103","Type":"ContainerStarted","Data":"f2d77cd8d450b522d157bb084ebc328616be208436affca25cb289e841f2c745"} Feb 23 10:55:57 crc kubenswrapper[4904]: I0223 10:55:57.424792 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22m5x"] Feb 23 10:55:58 crc kubenswrapper[4904]: I0223 10:55:58.092320 4904 generic.go:334] "Generic (PLEG): container finished" podID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerID="2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255" exitCode=0 Feb 23 10:55:58 crc kubenswrapper[4904]: I0223 10:55:58.092368 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22m5x" event={"ID":"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb","Type":"ContainerDied","Data":"2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255"} Feb 23 10:55:58 crc kubenswrapper[4904]: I0223 10:55:58.092603 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22m5x" event={"ID":"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb","Type":"ContainerStarted","Data":"ecca700f53fabdc465f729bbca461ba16b462ce9d47bd2473380ebfa318455c1"} Feb 23 10:55:58 crc kubenswrapper[4904]: I0223 10:55:58.097807 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7qxv" event={"ID":"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103","Type":"ContainerStarted","Data":"42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2"} Feb 23 10:56:00 crc kubenswrapper[4904]: I0223 10:56:00.116606 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22m5x" event={"ID":"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb","Type":"ContainerStarted","Data":"c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c"} Feb 23 10:56:00 crc kubenswrapper[4904]: I0223 10:56:00.118906 4904 generic.go:334] "Generic (PLEG): container finished" podID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerID="42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2" exitCode=0 Feb 23 10:56:00 crc kubenswrapper[4904]: I0223 10:56:00.118934 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7qxv" 
event={"ID":"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103","Type":"ContainerDied","Data":"42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2"} Feb 23 10:56:00 crc kubenswrapper[4904]: I0223 10:56:00.256441 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:56:00 crc kubenswrapper[4904]: E0223 10:56:00.256840 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:56:01 crc kubenswrapper[4904]: I0223 10:56:01.133485 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7qxv" event={"ID":"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103","Type":"ContainerStarted","Data":"06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874"} Feb 23 10:56:01 crc kubenswrapper[4904]: I0223 10:56:01.163285 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t7qxv" podStartSLOduration=2.6728384309999997 podStartE2EDuration="6.163259679s" podCreationTimestamp="2026-02-23 10:55:55 +0000 UTC" firstStartedPulling="2026-02-23 10:55:57.087348504 +0000 UTC m=+2990.507722017" lastFinishedPulling="2026-02-23 10:56:00.577769742 +0000 UTC m=+2993.998143265" observedRunningTime="2026-02-23 10:56:01.150215597 +0000 UTC m=+2994.570589110" watchObservedRunningTime="2026-02-23 10:56:01.163259679 +0000 UTC m=+2994.583633202" Feb 23 10:56:05 crc kubenswrapper[4904]: I0223 10:56:05.175957 4904 generic.go:334] "Generic (PLEG): container finished" podID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerID="c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c" exitCode=0 Feb 23 10:56:05 crc kubenswrapper[4904]: I0223 10:56:05.176059 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22m5x" event={"ID":"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb","Type":"ContainerDied","Data":"c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c"} Feb 23 10:56:05 crc kubenswrapper[4904]: I0223 10:56:05.936954 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:56:05 crc kubenswrapper[4904]: I0223 10:56:05.937298 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:56:06 crc kubenswrapper[4904]: I0223 10:56:06.028553 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:56:06 crc kubenswrapper[4904]: I0223 10:56:06.187978 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22m5x" event={"ID":"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb","Type":"ContainerStarted","Data":"f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea"} Feb 23 10:56:06 crc kubenswrapper[4904]: I0223 10:56:06.215540 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-22m5x" podStartSLOduration=2.596882067 podStartE2EDuration="10.215519398s" podCreationTimestamp="2026-02-23 10:55:56 +0000 
UTC" firstStartedPulling="2026-02-23 10:55:58.098580312 +0000 UTC m=+2991.518953825" lastFinishedPulling="2026-02-23 10:56:05.717217633 +0000 UTC m=+2999.137591156" observedRunningTime="2026-02-23 10:56:06.211166724 +0000 UTC m=+2999.631540237" watchObservedRunningTime="2026-02-23 10:56:06.215519398 +0000 UTC m=+2999.635892911" Feb 23 10:56:06 crc kubenswrapper[4904]: I0223 10:56:06.250148 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:56:06 crc kubenswrapper[4904]: I0223 10:56:06.926519 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:56:06 crc kubenswrapper[4904]: I0223 10:56:06.926596 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:56:07 crc kubenswrapper[4904]: I0223 10:56:07.792065 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t7qxv"] Feb 23 10:56:08 crc kubenswrapper[4904]: I0223 10:56:08.002011 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-22m5x" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="registry-server" probeResult="failure" output=< Feb 23 10:56:08 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 10:56:08 crc kubenswrapper[4904]: > Feb 23 10:56:08 crc kubenswrapper[4904]: I0223 10:56:08.208256 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t7qxv" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerName="registry-server" containerID="cri-o://06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874" gracePeriod=2 Feb 23 10:56:08 crc kubenswrapper[4904]: I0223 10:56:08.781969 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:56:08 crc kubenswrapper[4904]: I0223 10:56:08.943374 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-utilities\") pod \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " Feb 23 10:56:08 crc kubenswrapper[4904]: I0223 10:56:08.943558 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczgn\" (UniqueName: \"kubernetes.io/projected/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-kube-api-access-gczgn\") pod \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " Feb 23 10:56:08 crc kubenswrapper[4904]: I0223 10:56:08.943625 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-catalog-content\") pod \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\" (UID: \"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103\") " Feb 23 10:56:08 crc kubenswrapper[4904]: I0223 10:56:08.944190 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-utilities" (OuterVolumeSpecName: "utilities") pod "2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" (UID: "2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:56:08 crc kubenswrapper[4904]: I0223 10:56:08.952937 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-kube-api-access-gczgn" (OuterVolumeSpecName: "kube-api-access-gczgn") pod "2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" (UID: "2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103"). InnerVolumeSpecName "kube-api-access-gczgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.011058 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" (UID: "2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.046543 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczgn\" (UniqueName: \"kubernetes.io/projected/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-kube-api-access-gczgn\") on node \"crc\" DevicePath \"\"" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.046600 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.046618 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.221942 4904 generic.go:334] "Generic (PLEG): container finished" podID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerID="06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874" exitCode=0 Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.222001 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t7qxv" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.221997 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7qxv" event={"ID":"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103","Type":"ContainerDied","Data":"06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874"} Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.222044 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7qxv" event={"ID":"2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103","Type":"ContainerDied","Data":"f2d77cd8d450b522d157bb084ebc328616be208436affca25cb289e841f2c745"} Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.222062 4904 scope.go:117] "RemoveContainer" containerID="06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.266670 4904 scope.go:117] "RemoveContainer" containerID="42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.280000 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t7qxv"] Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.288941 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t7qxv"] Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.307215 4904 scope.go:117] "RemoveContainer" containerID="a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.343128 4904 scope.go:117] "RemoveContainer" containerID="06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874" Feb 23 10:56:09 crc kubenswrapper[4904]: E0223 10:56:09.343590 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874\": container with ID starting with 06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874 not found: ID does not exist" containerID="06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.343621 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874"} err="failed to get container status \"06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874\": rpc error: code = NotFound desc = could not find container \"06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874\": container with ID starting with 06fb300569e84fbbbb6035e1b63ec0ab9dd94185084e5d73ddc1b97d71d9d874 not found: ID does not exist" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.343641 4904 scope.go:117] "RemoveContainer" containerID="42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2" Feb 23 10:56:09 crc kubenswrapper[4904]: E0223 10:56:09.344080 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2\": container with ID starting with 42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2 not found: ID does not exist" containerID="42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.344131 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2"} err="failed to get container status \"42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2\": rpc error: code = NotFound desc = could not find container \"42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2\": container with ID starting with 42f8a6b589744dc1077d87ff3cf7a359171a56d6719852131985072fed1a69d2 not found: ID does not exist" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.344161 4904 scope.go:117] "RemoveContainer" containerID="a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82" Feb 23 10:56:09 crc kubenswrapper[4904]: E0223 10:56:09.344536 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82\": container with ID starting with a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82 not found: ID does not exist" containerID="a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82" Feb 23 10:56:09 crc kubenswrapper[4904]: I0223 10:56:09.344563 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82"} err="failed to get container status \"a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82\": rpc error: code = NotFound desc = could not find container \"a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82\": container with ID starting with a11e4b58fd61d25354d83bff47d1af5a2462d30e76f33920092a94afd4681e82 not found: ID does not exist" Feb 23 10:56:11 crc kubenswrapper[4904]: I0223 10:56:11.269193 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" path="/var/lib/kubelet/pods/2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103/volumes" Feb 23 10:56:12 crc kubenswrapper[4904]: I0223 10:56:12.254829 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:56:12 crc kubenswrapper[4904]: E0223 10:56:12.255544 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:56:16 crc kubenswrapper[4904]: I0223 10:56:16.997635 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:56:17 crc kubenswrapper[4904]: I0223 10:56:17.088986 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:56:17 crc kubenswrapper[4904]: I0223 10:56:17.251107 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-22m5x"] Feb 23 10:56:18 crc kubenswrapper[4904]: I0223 10:56:18.312192 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-22m5x" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="registry-server" 
containerID="cri-o://f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea" gracePeriod=2 Feb 23 10:56:18 crc kubenswrapper[4904]: I0223 10:56:18.868976 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:56:18 crc kubenswrapper[4904]: I0223 10:56:18.962152 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkmrx\" (UniqueName: \"kubernetes.io/projected/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-kube-api-access-kkmrx\") pod \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " Feb 23 10:56:18 crc kubenswrapper[4904]: I0223 10:56:18.962227 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-catalog-content\") pod \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " Feb 23 10:56:18 crc kubenswrapper[4904]: I0223 10:56:18.962281 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-utilities\") pod \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\" (UID: \"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb\") " Feb 23 10:56:18 crc kubenswrapper[4904]: I0223 10:56:18.963159 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-utilities" (OuterVolumeSpecName: "utilities") pod "60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" (UID: "60cd78aa-7882-4c27-ac99-13f7a5c9e8fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:56:18 crc kubenswrapper[4904]: I0223 10:56:18.977929 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-kube-api-access-kkmrx" (OuterVolumeSpecName: "kube-api-access-kkmrx") pod "60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" (UID: "60cd78aa-7882-4c27-ac99-13f7a5c9e8fb"). InnerVolumeSpecName "kube-api-access-kkmrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.064910 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.065435 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkmrx\" (UniqueName: \"kubernetes.io/projected/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-kube-api-access-kkmrx\") on node \"crc\" DevicePath \"\"" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.091137 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" (UID: "60cd78aa-7882-4c27-ac99-13f7a5c9e8fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.166789 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.323482 4904 generic.go:334] "Generic (PLEG): container finished" podID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerID="f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea" exitCode=0 Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.323806 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22m5x" event={"ID":"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb","Type":"ContainerDied","Data":"f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea"} Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.324839 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22m5x" event={"ID":"60cd78aa-7882-4c27-ac99-13f7a5c9e8fb","Type":"ContainerDied","Data":"ecca700f53fabdc465f729bbca461ba16b462ce9d47bd2473380ebfa318455c1"} Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.323910 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22m5x" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.324932 4904 scope.go:117] "RemoveContainer" containerID="f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.360932 4904 scope.go:117] "RemoveContainer" containerID="c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.365098 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-22m5x"] Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.393409 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-22m5x"] Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.440469 4904 scope.go:117] "RemoveContainer" containerID="2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.520365 4904 scope.go:117] "RemoveContainer" containerID="f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea" Feb 23 10:56:19 crc kubenswrapper[4904]: E0223 10:56:19.521819 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea\": container with ID starting with f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea not found: ID does not exist" containerID="f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.521847 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea"} err="failed to get container status \"f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea\": rpc error: code = NotFound desc = could not find container \"f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea\": container with ID starting with f01609b179d1c57b316edc7821a2cbbcb1e9a02fcb67a18adb08532d607b94ea not found: ID does not exist" Feb 23 10:56:19 crc 
kubenswrapper[4904]: I0223 10:56:19.521871 4904 scope.go:117] "RemoveContainer" containerID="c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c" Feb 23 10:56:19 crc kubenswrapper[4904]: E0223 10:56:19.531871 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c\": container with ID starting with c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c not found: ID does not exist" containerID="c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.532058 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c"} err="failed to get container status \"c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c\": rpc error: code = NotFound desc = could not find container \"c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c\": container with ID starting with c333dd88dc70aacbda156d9f483c7d5ba78a129e707630573d26fcfc99d3181c not found: ID does not exist" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.532141 4904 scope.go:117] "RemoveContainer" containerID="2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255" Feb 23 10:56:19 crc kubenswrapper[4904]: E0223 10:56:19.535791 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255\": container with ID starting with 2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255 not found: ID does not exist" containerID="2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255" Feb 23 10:56:19 crc kubenswrapper[4904]: I0223 10:56:19.535912 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255"} err="failed to get container status \"2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255\": rpc error: code = NotFound desc = could not find container \"2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255\": container with ID starting with 2a9d44cfa6d6d306a0578f33ef03907b77ab6328a1f326ab3cbc178ccfd66255 not found: ID does not exist" Feb 23 10:56:21 crc kubenswrapper[4904]: I0223 10:56:21.273466 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" path="/var/lib/kubelet/pods/60cd78aa-7882-4c27-ac99-13f7a5c9e8fb/volumes" Feb 23 10:56:27 crc kubenswrapper[4904]: I0223 10:56:27.266558 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:56:27 crc kubenswrapper[4904]: E0223 10:56:27.267575 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:56:39 crc kubenswrapper[4904]: I0223 10:56:39.256279 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" 
Feb 23 10:56:39 crc kubenswrapper[4904]: E0223 10:56:39.257219 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:56:52 crc kubenswrapper[4904]: I0223 10:56:52.256769 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:56:52 crc kubenswrapper[4904]: E0223 10:56:52.257866 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:57:07 crc kubenswrapper[4904]: I0223 10:57:07.289220 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:57:07 crc kubenswrapper[4904]: E0223 10:57:07.290485 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:57:20 crc kubenswrapper[4904]: I0223 10:57:20.258521 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:57:20 crc kubenswrapper[4904]: E0223 10:57:20.259154 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:57:32 crc kubenswrapper[4904]: I0223 10:57:32.255681 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:57:32 crc kubenswrapper[4904]: E0223 10:57:32.256874 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:57:43 crc kubenswrapper[4904]: I0223 10:57:43.255672 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:57:43 crc kubenswrapper[4904]: E0223 10:57:43.256631 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:57:55 crc kubenswrapper[4904]: I0223 10:57:55.255502 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:57:55 crc kubenswrapper[4904]: E0223 10:57:55.256400 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.468530 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9c2qw"] Feb 23 10:58:06 crc kubenswrapper[4904]: E0223 10:58:06.469798 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerName="extract-utilities" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.469821 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerName="extract-utilities" Feb 23 10:58:06 crc kubenswrapper[4904]: E0223 10:58:06.469837 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="extract-utilities" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.469845 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="extract-utilities" Feb 23 10:58:06 crc kubenswrapper[4904]: E0223 10:58:06.469867 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="registry-server" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.469877 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="registry-server" Feb 23 10:58:06 crc kubenswrapper[4904]: E0223 10:58:06.469912 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerName="extract-content" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.469923 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerName="extract-content" Feb 23 10:58:06 crc kubenswrapper[4904]: E0223 10:58:06.469942 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerName="registry-server" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.469951 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerName="registry-server" Feb 23 10:58:06 crc kubenswrapper[4904]: E0223 10:58:06.469979 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="extract-content" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.469987 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="extract-content" Feb 23 10:58:06 
crc kubenswrapper[4904]: I0223 10:58:06.470232 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa3d5b1-0dcf-44e8-b0e6-27c4b3908103" containerName="registry-server" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.470257 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="60cd78aa-7882-4c27-ac99-13f7a5c9e8fb" containerName="registry-server" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.472134 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.486559 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c2qw"] Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.522289 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzcj\" (UniqueName: \"kubernetes.io/projected/bc006315-91d9-4769-b779-dccf14aefb92-kube-api-access-8jzcj\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.522397 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-utilities\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.522445 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-catalog-content\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.624277 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzcj\" (UniqueName: \"kubernetes.io/projected/bc006315-91d9-4769-b779-dccf14aefb92-kube-api-access-8jzcj\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.624358 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-utilities\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.624395 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-catalog-content\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.625196 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-utilities\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " 
pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.625237 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-catalog-content\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.664403 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzcj\" (UniqueName: \"kubernetes.io/projected/bc006315-91d9-4769-b779-dccf14aefb92-kube-api-access-8jzcj\") pod \"redhat-marketplace-9c2qw\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:06 crc kubenswrapper[4904]: I0223 10:58:06.794501 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:07 crc kubenswrapper[4904]: W0223 10:58:07.272644 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc006315_91d9_4769_b779_dccf14aefb92.slice/crio-a9ab2c204e890c45b2845183a00d13d89f76e2321f478077a1806ba10090ffec WatchSource:0}: Error finding container a9ab2c204e890c45b2845183a00d13d89f76e2321f478077a1806ba10090ffec: Status 404 returned error can't find the container with id a9ab2c204e890c45b2845183a00d13d89f76e2321f478077a1806ba10090ffec Feb 23 10:58:07 crc kubenswrapper[4904]: I0223 10:58:07.292785 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:58:07 crc kubenswrapper[4904]: I0223 10:58:07.294917 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c2qw"] Feb 23 10:58:07 crc kubenswrapper[4904]: E0223 10:58:07.296446 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:58:07 crc kubenswrapper[4904]: I0223 10:58:07.583583 4904 generic.go:334] "Generic (PLEG): container finished" podID="bc006315-91d9-4769-b779-dccf14aefb92" containerID="0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de" exitCode=0 Feb 23 10:58:07 crc kubenswrapper[4904]: I0223 10:58:07.583746 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c2qw" event={"ID":"bc006315-91d9-4769-b779-dccf14aefb92","Type":"ContainerDied","Data":"0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de"} Feb 23 10:58:07 crc kubenswrapper[4904]: I0223 10:58:07.583934 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c2qw" event={"ID":"bc006315-91d9-4769-b779-dccf14aefb92","Type":"ContainerStarted","Data":"a9ab2c204e890c45b2845183a00d13d89f76e2321f478077a1806ba10090ffec"} Feb 23 10:58:07 crc kubenswrapper[4904]: I0223 10:58:07.625598 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:58:08 crc kubenswrapper[4904]: I0223 10:58:08.600974 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c2qw" event={"ID":"bc006315-91d9-4769-b779-dccf14aefb92","Type":"ContainerStarted","Data":"5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef"} Feb 23 10:58:09 crc kubenswrapper[4904]: I0223 10:58:09.615319 4904 generic.go:334] "Generic (PLEG): container finished" podID="bc006315-91d9-4769-b779-dccf14aefb92" containerID="5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef" exitCode=0 Feb 23 10:58:09 crc kubenswrapper[4904]: I0223 10:58:09.615387 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c2qw" event={"ID":"bc006315-91d9-4769-b779-dccf14aefb92","Type":"ContainerDied","Data":"5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef"} Feb 23 10:58:10 crc kubenswrapper[4904]: I0223 10:58:10.632559 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c2qw" event={"ID":"bc006315-91d9-4769-b779-dccf14aefb92","Type":"ContainerStarted","Data":"c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e"} Feb 23 10:58:10 crc kubenswrapper[4904]: I0223 10:58:10.663932 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9c2qw" podStartSLOduration=2.246579587 podStartE2EDuration="4.663915208s" podCreationTimestamp="2026-02-23 10:58:06 +0000 UTC" firstStartedPulling="2026-02-23 10:58:07.625328443 +0000 UTC m=+3121.045701956" lastFinishedPulling="2026-02-23 10:58:10.042664044 +0000 UTC m=+3123.463037577" observedRunningTime="2026-02-23 10:58:10.654163841 +0000 UTC m=+3124.074537394" watchObservedRunningTime="2026-02-23 10:58:10.663915208 +0000 UTC m=+3124.084288721" Feb 23 10:58:16 crc kubenswrapper[4904]: I0223 10:58:16.795742 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:16 crc kubenswrapper[4904]: I0223 10:58:16.796376 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:16 crc kubenswrapper[4904]: I0223 10:58:16.872909 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:17 crc kubenswrapper[4904]: I0223 10:58:17.800898 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:17 crc kubenswrapper[4904]: I0223 10:58:17.871090 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c2qw"] Feb 23 10:58:19 crc kubenswrapper[4904]: I0223 10:58:19.255932 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:58:19 crc kubenswrapper[4904]: E0223 10:58:19.257155 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:58:19 crc kubenswrapper[4904]: I0223 10:58:19.739086 4904 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-9c2qw" podUID="bc006315-91d9-4769-b779-dccf14aefb92" containerName="registry-server" containerID="cri-o://c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e" gracePeriod=2 Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.385576 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.481991 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-utilities\") pod \"bc006315-91d9-4769-b779-dccf14aefb92\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.482156 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jzcj\" (UniqueName: \"kubernetes.io/projected/bc006315-91d9-4769-b779-dccf14aefb92-kube-api-access-8jzcj\") pod \"bc006315-91d9-4769-b779-dccf14aefb92\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.482191 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-catalog-content\") pod \"bc006315-91d9-4769-b779-dccf14aefb92\" (UID: \"bc006315-91d9-4769-b779-dccf14aefb92\") " Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.482942 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-utilities" (OuterVolumeSpecName: "utilities") pod "bc006315-91d9-4769-b779-dccf14aefb92" (UID: "bc006315-91d9-4769-b779-dccf14aefb92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.488917 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc006315-91d9-4769-b779-dccf14aefb92-kube-api-access-8jzcj" (OuterVolumeSpecName: "kube-api-access-8jzcj") pod "bc006315-91d9-4769-b779-dccf14aefb92" (UID: "bc006315-91d9-4769-b779-dccf14aefb92"). InnerVolumeSpecName "kube-api-access-8jzcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.520836 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc006315-91d9-4769-b779-dccf14aefb92" (UID: "bc006315-91d9-4769-b779-dccf14aefb92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.584441 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.584480 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jzcj\" (UniqueName: \"kubernetes.io/projected/bc006315-91d9-4769-b779-dccf14aefb92-kube-api-access-8jzcj\") on node \"crc\" DevicePath \"\"" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.584491 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc006315-91d9-4769-b779-dccf14aefb92-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.756203 4904 generic.go:334] "Generic (PLEG): container finished" podID="bc006315-91d9-4769-b779-dccf14aefb92" containerID="c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e" exitCode=0 Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.756265 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c2qw" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.756295 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c2qw" event={"ID":"bc006315-91d9-4769-b779-dccf14aefb92","Type":"ContainerDied","Data":"c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e"} Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.756868 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c2qw" event={"ID":"bc006315-91d9-4769-b779-dccf14aefb92","Type":"ContainerDied","Data":"a9ab2c204e890c45b2845183a00d13d89f76e2321f478077a1806ba10090ffec"} Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.756913 4904 scope.go:117] "RemoveContainer" containerID="c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.789293 4904 scope.go:117] "RemoveContainer" containerID="5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.811745 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c2qw"] Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.824504 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c2qw"] Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.847698 4904 scope.go:117] "RemoveContainer" containerID="0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.909066 4904 scope.go:117] "RemoveContainer" containerID="c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e" Feb 23 10:58:20 crc kubenswrapper[4904]: E0223 10:58:20.909774 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e\": container with ID starting with c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e not found: ID does not exist" containerID="c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.909937 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e"} err="failed to get container status \"c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e\": rpc error: code = NotFound desc = could not find container \"c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e\": container with ID starting with c01c723ea4fd32ef00288a88a72c368645fb1384a87569f4685667557712777e not found: ID does not exist" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.910075 4904 scope.go:117] "RemoveContainer" containerID="5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef" Feb 23 10:58:20 crc kubenswrapper[4904]: E0223 10:58:20.910617 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef\": container with ID starting with 5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef not found: ID does not exist" containerID="5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.910655 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef"} err="failed to get container status \"5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef\": rpc error: code = NotFound desc = could not find container \"5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef\": container with ID starting with 5c7c6994eb7b28960b28d8c16972722334d38f60087e6aaa1591535b9865a6ef not found: ID does not exist" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.910680 4904 scope.go:117] "RemoveContainer" containerID="0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de" Feb 23 10:58:20 crc kubenswrapper[4904]: E0223 10:58:20.911291 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de\": container with ID starting with 0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de not found: ID does not exist" containerID="0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de" Feb 23 10:58:20 crc kubenswrapper[4904]: I0223 10:58:20.911322 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de"} err="failed to get container status \"0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de\": rpc error: code = NotFound desc = could not find container \"0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de\": container with ID starting with 0e67f20a4b81e0598853b4a19e20df9cae63f1d7b1d98241d4deb3b1ac0ff5de not found: ID does not exist" Feb 23 10:58:21 crc kubenswrapper[4904]: I0223 10:58:21.267563 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc006315-91d9-4769-b779-dccf14aefb92" path="/var/lib/kubelet/pods/bc006315-91d9-4769-b779-dccf14aefb92/volumes" Feb 23 10:58:32 crc kubenswrapper[4904]: I0223 10:58:32.255210 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:58:32 crc kubenswrapper[4904]: E0223 10:58:32.256092 4904 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:58:47 crc kubenswrapper[4904]: I0223 10:58:47.269261 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:58:47 crc kubenswrapper[4904]: E0223 10:58:47.270978 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:59:02 crc kubenswrapper[4904]: I0223 10:59:02.722778 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:59:02 crc kubenswrapper[4904]: E0223 10:59:02.723556 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:59:15 crc kubenswrapper[4904]: E0223 10:59:15.934349 4904 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.138:38112->38.102.83.138:41413: read tcp 38.102.83.138:38112->38.102.83.138:41413: read: connection reset by peer Feb 23 10:59:17 crc kubenswrapper[4904]: I0223 10:59:17.274530 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:59:17 crc kubenswrapper[4904]: E0223 10:59:17.275292 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:59:32 crc kubenswrapper[4904]: I0223 10:59:32.256270 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:59:32 crc kubenswrapper[4904]: E0223 10:59:32.258764 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 10:59:47 crc kubenswrapper[4904]: I0223 10:59:47.270153 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 10:59:47 crc kubenswrapper[4904]: 
E0223 10:59:47.271953 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.162389 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg"] Feb 23 11:00:00 crc kubenswrapper[4904]: E0223 11:00:00.163508 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc006315-91d9-4769-b779-dccf14aefb92" containerName="extract-utilities" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.163528 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc006315-91d9-4769-b779-dccf14aefb92" containerName="extract-utilities" Feb 23 11:00:00 crc kubenswrapper[4904]: E0223 11:00:00.163545 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc006315-91d9-4769-b779-dccf14aefb92" containerName="extract-content" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.163554 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc006315-91d9-4769-b779-dccf14aefb92" containerName="extract-content" Feb 23 11:00:00 crc kubenswrapper[4904]: E0223 11:00:00.163578 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc006315-91d9-4769-b779-dccf14aefb92" containerName="registry-server" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.163586 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc006315-91d9-4769-b779-dccf14aefb92" containerName="registry-server" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.163891 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc006315-91d9-4769-b779-dccf14aefb92" containerName="registry-server" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.164951 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.167934 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.168762 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.177415 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg"] Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.207466 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4396219-7538-4724-bdde-e3920b91d390-secret-volume\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.207526 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sd62\" (UniqueName: \"kubernetes.io/projected/c4396219-7538-4724-bdde-e3920b91d390-kube-api-access-4sd62\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.207605 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4396219-7538-4724-bdde-e3920b91d390-config-volume\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.256017 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 11:00:00 crc kubenswrapper[4904]: E0223 11:00:00.256280 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.309098 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4396219-7538-4724-bdde-e3920b91d390-secret-volume\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.309186 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sd62\" (UniqueName: \"kubernetes.io/projected/c4396219-7538-4724-bdde-e3920b91d390-kube-api-access-4sd62\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 
23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.309248 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4396219-7538-4724-bdde-e3920b91d390-config-volume\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.310162 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4396219-7538-4724-bdde-e3920b91d390-config-volume\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.320678 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4396219-7538-4724-bdde-e3920b91d390-secret-volume\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.327376 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sd62\" (UniqueName: \"kubernetes.io/projected/c4396219-7538-4724-bdde-e3920b91d390-kube-api-access-4sd62\") pod \"collect-profiles-29530740-54cjg\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.491156 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:00 crc kubenswrapper[4904]: I0223 11:00:00.989118 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg"] Feb 23 11:00:01 crc kubenswrapper[4904]: I0223 11:00:01.438291 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" event={"ID":"c4396219-7538-4724-bdde-e3920b91d390","Type":"ContainerStarted","Data":"a18b3ccf43b1e36cf7f6fb6a064a92d92eb0d14562f8c98089a50baf9ba95624"} Feb 23 11:00:01 crc kubenswrapper[4904]: I0223 11:00:01.438650 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" event={"ID":"c4396219-7538-4724-bdde-e3920b91d390","Type":"ContainerStarted","Data":"dc8bb7db22fa592d8043df4e74b00ea67c5439664429706113ed4a82cbaa3cce"} Feb 23 11:00:02 crc kubenswrapper[4904]: I0223 11:00:02.449806 4904 generic.go:334] "Generic (PLEG): container finished" podID="c4396219-7538-4724-bdde-e3920b91d390" containerID="a18b3ccf43b1e36cf7f6fb6a064a92d92eb0d14562f8c98089a50baf9ba95624" exitCode=0 Feb 23 11:00:02 crc kubenswrapper[4904]: I0223 11:00:02.449907 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" event={"ID":"c4396219-7538-4724-bdde-e3920b91d390","Type":"ContainerDied","Data":"a18b3ccf43b1e36cf7f6fb6a064a92d92eb0d14562f8c98089a50baf9ba95624"} Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.831500 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.892397 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4396219-7538-4724-bdde-e3920b91d390-secret-volume\") pod \"c4396219-7538-4724-bdde-e3920b91d390\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.892635 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sd62\" (UniqueName: \"kubernetes.io/projected/c4396219-7538-4724-bdde-e3920b91d390-kube-api-access-4sd62\") pod \"c4396219-7538-4724-bdde-e3920b91d390\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.892674 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4396219-7538-4724-bdde-e3920b91d390-config-volume\") pod \"c4396219-7538-4724-bdde-e3920b91d390\" (UID: \"c4396219-7538-4724-bdde-e3920b91d390\") " Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.893473 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4396219-7538-4724-bdde-e3920b91d390-config-volume" (OuterVolumeSpecName: "config-volume") pod "c4396219-7538-4724-bdde-e3920b91d390" (UID: "c4396219-7538-4724-bdde-e3920b91d390"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.894246 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4396219-7538-4724-bdde-e3920b91d390-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.902330 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4396219-7538-4724-bdde-e3920b91d390-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c4396219-7538-4724-bdde-e3920b91d390" (UID: "c4396219-7538-4724-bdde-e3920b91d390"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.902352 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4396219-7538-4724-bdde-e3920b91d390-kube-api-access-4sd62" (OuterVolumeSpecName: "kube-api-access-4sd62") pod "c4396219-7538-4724-bdde-e3920b91d390" (UID: "c4396219-7538-4724-bdde-e3920b91d390"). InnerVolumeSpecName "kube-api-access-4sd62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.996483 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4396219-7538-4724-bdde-e3920b91d390-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 11:00:03 crc kubenswrapper[4904]: I0223 11:00:03.996679 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sd62\" (UniqueName: \"kubernetes.io/projected/c4396219-7538-4724-bdde-e3920b91d390-kube-api-access-4sd62\") on node \"crc\" DevicePath \"\"" Feb 23 11:00:04 crc kubenswrapper[4904]: I0223 11:00:04.470699 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" event={"ID":"c4396219-7538-4724-bdde-e3920b91d390","Type":"ContainerDied","Data":"dc8bb7db22fa592d8043df4e74b00ea67c5439664429706113ed4a82cbaa3cce"} Feb 23 11:00:04 crc kubenswrapper[4904]: I0223 11:00:04.470769 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530740-54cjg" Feb 23 11:00:04 crc kubenswrapper[4904]: I0223 11:00:04.470778 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc8bb7db22fa592d8043df4e74b00ea67c5439664429706113ed4a82cbaa3cce" Feb 23 11:00:04 crc kubenswrapper[4904]: I0223 11:00:04.547511 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4"] Feb 23 11:00:04 crc kubenswrapper[4904]: I0223 11:00:04.557897 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530695-x77z4"] Feb 23 11:00:05 crc kubenswrapper[4904]: I0223 11:00:05.278052 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19008559-f0c8-40ea-9898-0fcf1c21ef3c" path="/var/lib/kubelet/pods/19008559-f0c8-40ea-9898-0fcf1c21ef3c/volumes" Feb 23 11:00:15 crc kubenswrapper[4904]: I0223 11:00:15.255480 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 11:00:15 crc kubenswrapper[4904]: E0223 11:00:15.256193 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:00:27 crc kubenswrapper[4904]: I0223 11:00:27.270174 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 11:00:27 crc kubenswrapper[4904]: E0223 11:00:27.271628 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:00:40 crc kubenswrapper[4904]: I0223 11:00:40.255393 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 
11:00:40 crc kubenswrapper[4904]: E0223 11:00:40.256449 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:00:46 crc kubenswrapper[4904]: I0223 11:00:46.735613 4904 scope.go:117] "RemoveContainer" containerID="46c86d7a497073f2c1577615e692da5a55e76e49fa839aee67ff31875e90c7ad" Feb 23 11:00:54 crc kubenswrapper[4904]: I0223 11:00:54.257018 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 11:00:55 crc kubenswrapper[4904]: I0223 11:00:55.038333 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"42925d3ee1d179ea60e0f55f2416554c8d91a37a7a505bfbf7feb9da06d8c0c0"} Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.184050 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29530741-c4wkf"] Feb 23 11:01:00 crc kubenswrapper[4904]: E0223 11:01:00.185780 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4396219-7538-4724-bdde-e3920b91d390" containerName="collect-profiles" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.185818 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4396219-7538-4724-bdde-e3920b91d390" containerName="collect-profiles" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.186377 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4396219-7538-4724-bdde-e3920b91d390" containerName="collect-profiles" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.188056 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.221020 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29530741-c4wkf"] Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.334495 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnv26\" (UniqueName: \"kubernetes.io/projected/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-kube-api-access-dnv26\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.334547 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-config-data\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.334620 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-combined-ca-bundle\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.335393 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-fernet-keys\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.437335 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnv26\" (UniqueName: \"kubernetes.io/projected/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-kube-api-access-dnv26\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.437392 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-config-data\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.437500 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-combined-ca-bundle\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.437559 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-fernet-keys\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.444806 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-config-data\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.445334 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-fernet-keys\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.446214 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-combined-ca-bundle\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.470045 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnv26\" (UniqueName: \"kubernetes.io/projected/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-kube-api-access-dnv26\") pod \"keystone-cron-29530741-c4wkf\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.528458 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:00 crc kubenswrapper[4904]: I0223 11:01:00.990150 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29530741-c4wkf"] Feb 23 11:01:00 crc kubenswrapper[4904]: W0223 11:01:00.995740 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea4a64d0_ed1f_4380_b948_fe750eb2c9af.slice/crio-ba247a0565ea129a3c39c96e1474f62f08f54ac0dca90258c74ff6da0cb83d71 WatchSource:0}: Error finding container ba247a0565ea129a3c39c96e1474f62f08f54ac0dca90258c74ff6da0cb83d71: Status 404 returned error can't find the container with id ba247a0565ea129a3c39c96e1474f62f08f54ac0dca90258c74ff6da0cb83d71 Feb 23 11:01:01 crc kubenswrapper[4904]: I0223 11:01:01.124785 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530741-c4wkf" event={"ID":"ea4a64d0-ed1f-4380-b948-fe750eb2c9af","Type":"ContainerStarted","Data":"ba247a0565ea129a3c39c96e1474f62f08f54ac0dca90258c74ff6da0cb83d71"} Feb 23 11:01:02 crc kubenswrapper[4904]: I0223 11:01:02.136047 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530741-c4wkf" event={"ID":"ea4a64d0-ed1f-4380-b948-fe750eb2c9af","Type":"ContainerStarted","Data":"787336f5b3e946e6d64cb6adcdff3e2d22eafcb7ffbc4e3cca95daa2ba4cd391"} Feb 23 11:01:02 crc kubenswrapper[4904]: I0223 11:01:02.151199 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29530741-c4wkf" podStartSLOduration=2.151179531 podStartE2EDuration="2.151179531s" podCreationTimestamp="2026-02-23 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 11:01:02.151021727 +0000 UTC m=+3295.571395240" watchObservedRunningTime="2026-02-23 11:01:02.151179531 +0000 UTC m=+3295.571553044" Feb 23 11:01:06 crc kubenswrapper[4904]: I0223 11:01:06.179909 4904 
generic.go:334] "Generic (PLEG): container finished" podID="ea4a64d0-ed1f-4380-b948-fe750eb2c9af" containerID="787336f5b3e946e6d64cb6adcdff3e2d22eafcb7ffbc4e3cca95daa2ba4cd391" exitCode=0 Feb 23 11:01:06 crc kubenswrapper[4904]: I0223 11:01:06.180066 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530741-c4wkf" event={"ID":"ea4a64d0-ed1f-4380-b948-fe750eb2c9af","Type":"ContainerDied","Data":"787336f5b3e946e6d64cb6adcdff3e2d22eafcb7ffbc4e3cca95daa2ba4cd391"} Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.651790 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.802746 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnv26\" (UniqueName: \"kubernetes.io/projected/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-kube-api-access-dnv26\") pod \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.802827 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-fernet-keys\") pod \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.802962 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-config-data\") pod \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.803027 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-combined-ca-bundle\") pod \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\" (UID: \"ea4a64d0-ed1f-4380-b948-fe750eb2c9af\") " Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.809966 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ea4a64d0-ed1f-4380-b948-fe750eb2c9af" (UID: "ea4a64d0-ed1f-4380-b948-fe750eb2c9af"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.811461 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-kube-api-access-dnv26" (OuterVolumeSpecName: "kube-api-access-dnv26") pod "ea4a64d0-ed1f-4380-b948-fe750eb2c9af" (UID: "ea4a64d0-ed1f-4380-b948-fe750eb2c9af"). InnerVolumeSpecName "kube-api-access-dnv26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.862308 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea4a64d0-ed1f-4380-b948-fe750eb2c9af" (UID: "ea4a64d0-ed1f-4380-b948-fe750eb2c9af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.880034 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-config-data" (OuterVolumeSpecName: "config-data") pod "ea4a64d0-ed1f-4380-b948-fe750eb2c9af" (UID: "ea4a64d0-ed1f-4380-b948-fe750eb2c9af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.906340 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.906374 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.906384 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnv26\" (UniqueName: \"kubernetes.io/projected/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-kube-api-access-dnv26\") on node \"crc\" DevicePath \"\"" Feb 23 11:01:07 crc kubenswrapper[4904]: I0223 11:01:07.906393 4904 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea4a64d0-ed1f-4380-b948-fe750eb2c9af-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 11:01:08 crc kubenswrapper[4904]: I0223 11:01:08.215255 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530741-c4wkf" event={"ID":"ea4a64d0-ed1f-4380-b948-fe750eb2c9af","Type":"ContainerDied","Data":"ba247a0565ea129a3c39c96e1474f62f08f54ac0dca90258c74ff6da0cb83d71"} Feb 23 11:01:08 crc kubenswrapper[4904]: I0223 11:01:08.215768 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba247a0565ea129a3c39c96e1474f62f08f54ac0dca90258c74ff6da0cb83d71" Feb 23 11:01:08 crc kubenswrapper[4904]: I0223 11:01:08.215777 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29530741-c4wkf" Feb 23 11:03:17 crc kubenswrapper[4904]: I0223 11:03:17.398650 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:03:17 crc kubenswrapper[4904]: I0223 11:03:17.399398 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:03:47 crc kubenswrapper[4904]: I0223 11:03:47.398808 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:03:47 crc kubenswrapper[4904]: I0223 11:03:47.399535 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:04:17 crc kubenswrapper[4904]: I0223 11:04:17.398006 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:04:17 crc kubenswrapper[4904]: I0223 11:04:17.398462 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:04:17 crc kubenswrapper[4904]: I0223 11:04:17.398504 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 11:04:17 crc kubenswrapper[4904]: I0223 11:04:17.399026 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42925d3ee1d179ea60e0f55f2416554c8d91a37a7a505bfbf7feb9da06d8c0c0"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 11:04:17 crc kubenswrapper[4904]: I0223 11:04:17.399073 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://42925d3ee1d179ea60e0f55f2416554c8d91a37a7a505bfbf7feb9da06d8c0c0" gracePeriod=600 Feb 23 11:04:18 crc kubenswrapper[4904]: I0223 11:04:18.461306 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" 
containerID="42925d3ee1d179ea60e0f55f2416554c8d91a37a7a505bfbf7feb9da06d8c0c0" exitCode=0 Feb 23 11:04:18 crc kubenswrapper[4904]: I0223 11:04:18.461359 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"42925d3ee1d179ea60e0f55f2416554c8d91a37a7a505bfbf7feb9da06d8c0c0"} Feb 23 11:04:18 crc kubenswrapper[4904]: I0223 11:04:18.461802 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23"} Feb 23 11:04:18 crc kubenswrapper[4904]: I0223 11:04:18.461833 4904 scope.go:117] "RemoveContainer" containerID="7c0298605678bfe37634e71af28dce6057a999d842a608f53e5e7ac39b35441d" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.498180 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8t2kf"] Feb 23 11:05:22 crc kubenswrapper[4904]: E0223 11:05:22.499747 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4a64d0-ed1f-4380-b948-fe750eb2c9af" containerName="keystone-cron" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.499773 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4a64d0-ed1f-4380-b948-fe750eb2c9af" containerName="keystone-cron" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.501211 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4a64d0-ed1f-4380-b948-fe750eb2c9af" containerName="keystone-cron" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.505397 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.511324 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8t2kf"] Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.635342 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-utilities\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.635408 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-catalog-content\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.635862 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrh2c\" (UniqueName: \"kubernetes.io/projected/82ea5890-dd3b-4d82-b1f4-8c7626569520-kube-api-access-qrh2c\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.754344 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrh2c\" (UniqueName: \"kubernetes.io/projected/82ea5890-dd3b-4d82-b1f4-8c7626569520-kube-api-access-qrh2c\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.754541 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-utilities\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.754584 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-catalog-content\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.755942 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-utilities\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.755965 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-catalog-content\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.781043 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qrh2c\" (UniqueName: \"kubernetes.io/projected/82ea5890-dd3b-4d82-b1f4-8c7626569520-kube-api-access-qrh2c\") pod \"certified-operators-8t2kf\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:22 crc kubenswrapper[4904]: I0223 11:05:22.826459 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:23 crc kubenswrapper[4904]: I0223 11:05:23.359211 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8t2kf"] Feb 23 11:05:24 crc kubenswrapper[4904]: I0223 11:05:24.169464 4904 generic.go:334] "Generic (PLEG): container finished" podID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerID="ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24" exitCode=0 Feb 23 11:05:24 crc kubenswrapper[4904]: I0223 11:05:24.169557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t2kf" event={"ID":"82ea5890-dd3b-4d82-b1f4-8c7626569520","Type":"ContainerDied","Data":"ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24"} Feb 23 11:05:24 crc kubenswrapper[4904]: I0223 11:05:24.170085 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t2kf" event={"ID":"82ea5890-dd3b-4d82-b1f4-8c7626569520","Type":"ContainerStarted","Data":"9fa13f73ea31521020edff5f9cf914ca23d9b3c1497171dfda2b71b4332ba118"} Feb 23 11:05:24 crc kubenswrapper[4904]: I0223 11:05:24.172968 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 11:05:26 crc kubenswrapper[4904]: I0223 11:05:26.197362 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t2kf" event={"ID":"82ea5890-dd3b-4d82-b1f4-8c7626569520","Type":"ContainerStarted","Data":"d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927"} Feb 23 11:05:28 crc kubenswrapper[4904]: I0223 11:05:28.217042 4904 generic.go:334] "Generic (PLEG): container finished" podID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerID="d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927" exitCode=0 Feb 23 11:05:28 crc kubenswrapper[4904]: I0223 11:05:28.217237 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t2kf" event={"ID":"82ea5890-dd3b-4d82-b1f4-8c7626569520","Type":"ContainerDied","Data":"d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927"} Feb 23 11:05:29 crc kubenswrapper[4904]: I0223 11:05:29.234075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t2kf" event={"ID":"82ea5890-dd3b-4d82-b1f4-8c7626569520","Type":"ContainerStarted","Data":"82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf"} Feb 23 11:05:29 crc kubenswrapper[4904]: I0223 11:05:29.269730 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8t2kf" podStartSLOduration=2.84370705 podStartE2EDuration="7.269694846s" podCreationTimestamp="2026-02-23 11:05:22 +0000 UTC" firstStartedPulling="2026-02-23 11:05:24.172686876 +0000 UTC m=+3557.593060399" lastFinishedPulling="2026-02-23 11:05:28.598674682 +0000 UTC m=+3562.019048195" observedRunningTime="2026-02-23 11:05:29.268689077 +0000 UTC m=+3562.689062600" watchObservedRunningTime="2026-02-23 
11:05:29.269694846 +0000 UTC m=+3562.690068359" Feb 23 11:05:32 crc kubenswrapper[4904]: I0223 11:05:32.827360 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:32 crc kubenswrapper[4904]: I0223 11:05:32.828043 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:32 crc kubenswrapper[4904]: I0223 11:05:32.907404 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:33 crc kubenswrapper[4904]: I0223 11:05:33.325545 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:34 crc kubenswrapper[4904]: I0223 11:05:34.669001 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8t2kf"] Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.297641 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8t2kf" podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerName="registry-server" containerID="cri-o://82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf" gracePeriod=2 Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.723384 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.860111 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-utilities\") pod \"82ea5890-dd3b-4d82-b1f4-8c7626569520\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.860312 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-catalog-content\") pod \"82ea5890-dd3b-4d82-b1f4-8c7626569520\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.860405 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrh2c\" (UniqueName: \"kubernetes.io/projected/82ea5890-dd3b-4d82-b1f4-8c7626569520-kube-api-access-qrh2c\") pod \"82ea5890-dd3b-4d82-b1f4-8c7626569520\" (UID: \"82ea5890-dd3b-4d82-b1f4-8c7626569520\") " Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.861202 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-utilities" (OuterVolumeSpecName: "utilities") pod "82ea5890-dd3b-4d82-b1f4-8c7626569520" (UID: "82ea5890-dd3b-4d82-b1f4-8c7626569520"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.868985 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ea5890-dd3b-4d82-b1f4-8c7626569520-kube-api-access-qrh2c" (OuterVolumeSpecName: "kube-api-access-qrh2c") pod "82ea5890-dd3b-4d82-b1f4-8c7626569520" (UID: "82ea5890-dd3b-4d82-b1f4-8c7626569520"). InnerVolumeSpecName "kube-api-access-qrh2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.917099 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82ea5890-dd3b-4d82-b1f4-8c7626569520" (UID: "82ea5890-dd3b-4d82-b1f4-8c7626569520"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.962766 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.962810 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ea5890-dd3b-4d82-b1f4-8c7626569520-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:05:36 crc kubenswrapper[4904]: I0223 11:05:36.962825 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrh2c\" (UniqueName: \"kubernetes.io/projected/82ea5890-dd3b-4d82-b1f4-8c7626569520-kube-api-access-qrh2c\") on node \"crc\" DevicePath \"\"" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.310358 4904 generic.go:334] "Generic (PLEG): container finished" podID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerID="82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf" exitCode=0 Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.310473 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t2kf" event={"ID":"82ea5890-dd3b-4d82-b1f4-8c7626569520","Type":"ContainerDied","Data":"82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf"} Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.310499 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8t2kf" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.312206 4904 scope.go:117] "RemoveContainer" containerID="82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.312023 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t2kf" event={"ID":"82ea5890-dd3b-4d82-b1f4-8c7626569520","Type":"ContainerDied","Data":"9fa13f73ea31521020edff5f9cf914ca23d9b3c1497171dfda2b71b4332ba118"} Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.347203 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8t2kf"] Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.349408 4904 scope.go:117] "RemoveContainer" containerID="d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.361122 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8t2kf"] Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.372762 4904 scope.go:117] "RemoveContainer" containerID="ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.447597 4904 scope.go:117] "RemoveContainer" containerID="82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf" Feb 23 11:05:37 crc kubenswrapper[4904]: E0223 11:05:37.448239 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf\": container with ID starting with 82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf not found: ID does not exist" containerID="82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.448277 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf"} err="failed to get container status \"82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf\": rpc error: code = NotFound desc = could not find container \"82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf\": container with ID starting with 82fca9cb71763c78c2f7132ca4dc0f704fb63c858bff8a43d0c04591d3239caf not found: ID does not exist" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.448306 4904 scope.go:117] "RemoveContainer" containerID="d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927" Feb 23 11:05:37 crc kubenswrapper[4904]: E0223 11:05:37.448815 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927\": container with ID starting with d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927 not found: ID does not exist" containerID="d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.448904 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927"} err="failed to get container status \"d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927\": rpc error: code = NotFound desc = could not find 
container \"d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927\": container with ID starting with d812cca6482c653fa47ef4d2e94fd67e489a915f00c01dd239bd312406413927 not found: ID does not exist" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.448955 4904 scope.go:117] "RemoveContainer" containerID="ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24" Feb 23 11:05:37 crc kubenswrapper[4904]: E0223 11:05:37.449425 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24\": container with ID starting with ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24 not found: ID does not exist" containerID="ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24" Feb 23 11:05:37 crc kubenswrapper[4904]: I0223 11:05:37.449468 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24"} err="failed to get container status \"ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24\": rpc error: code = NotFound desc = could not find container \"ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24\": container with ID starting with ff72e3bfb6301358c50173087d626b168d7dc3644e5d9d3cf19d17bab1930e24 not found: ID does not exist" Feb 23 11:05:39 crc kubenswrapper[4904]: I0223 11:05:39.270966 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" path="/var/lib/kubelet/pods/82ea5890-dd3b-4d82-b1f4-8c7626569520/volumes" Feb 23 11:06:17 crc kubenswrapper[4904]: I0223 11:06:17.399033 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:06:17 crc kubenswrapper[4904]: I0223 11:06:17.399646 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.124583 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lz4f7"] Feb 23 11:06:28 crc kubenswrapper[4904]: E0223 11:06:28.125545 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerName="extract-utilities" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.125561 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerName="extract-utilities" Feb 23 11:06:28 crc kubenswrapper[4904]: E0223 11:06:28.125587 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerName="registry-server" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.125595 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerName="registry-server" Feb 23 11:06:28 crc kubenswrapper[4904]: E0223 11:06:28.125618 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerName="extract-content" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.125625 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerName="extract-content" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.125879 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ea5890-dd3b-4d82-b1f4-8c7626569520" containerName="registry-server" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.127464 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.184142 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lz4f7"] Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.208390 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-utilities\") pod \"redhat-operators-lz4f7\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.208497 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-catalog-content\") pod \"redhat-operators-lz4f7\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.208567 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrl96\" (UniqueName: \"kubernetes.io/projected/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-kube-api-access-mrl96\") pod \"redhat-operators-lz4f7\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.310335 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-utilities\") pod \"redhat-operators-lz4f7\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.310681 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-catalog-content\") pod \"redhat-operators-lz4f7\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.310770 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrl96\" (UniqueName: \"kubernetes.io/projected/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-kube-api-access-mrl96\") pod \"redhat-operators-lz4f7\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.311005 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-catalog-content\") pod \"redhat-operators-lz4f7\" (UID: 
\"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.311389 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-utilities\") pod \"redhat-operators-lz4f7\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.331867 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrl96\" (UniqueName: \"kubernetes.io/projected/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-kube-api-access-mrl96\") pod \"redhat-operators-lz4f7\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.485656 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:28 crc kubenswrapper[4904]: I0223 11:06:28.979249 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lz4f7"] Feb 23 11:06:29 crc kubenswrapper[4904]: I0223 11:06:29.239950 4904 generic.go:334] "Generic (PLEG): container finished" podID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerID="71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e" exitCode=0 Feb 23 11:06:29 crc kubenswrapper[4904]: I0223 11:06:29.240139 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4f7" event={"ID":"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85","Type":"ContainerDied","Data":"71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e"} Feb 23 11:06:29 crc kubenswrapper[4904]: I0223 11:06:29.240283 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4f7" event={"ID":"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85","Type":"ContainerStarted","Data":"f71e080bca5e27838fc1afa26074328dfc93e08e05a841795ac27f4d372e068c"} Feb 23 11:06:30 crc kubenswrapper[4904]: I0223 11:06:30.253585 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4f7" event={"ID":"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85","Type":"ContainerStarted","Data":"aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e"} Feb 23 11:06:37 crc kubenswrapper[4904]: I0223 11:06:37.340517 4904 generic.go:334] "Generic (PLEG): container finished" podID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerID="aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e" exitCode=0 Feb 23 11:06:37 crc kubenswrapper[4904]: I0223 11:06:37.340653 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4f7" event={"ID":"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85","Type":"ContainerDied","Data":"aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e"} Feb 23 11:06:38 crc kubenswrapper[4904]: I0223 11:06:38.351669 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4f7" event={"ID":"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85","Type":"ContainerStarted","Data":"bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c"} Feb 23 11:06:38 crc kubenswrapper[4904]: I0223 11:06:38.380367 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lz4f7" podStartSLOduration=1.891691646 
podStartE2EDuration="10.380347731s" podCreationTimestamp="2026-02-23 11:06:28 +0000 UTC" firstStartedPulling="2026-02-23 11:06:29.242844013 +0000 UTC m=+3622.663217536" lastFinishedPulling="2026-02-23 11:06:37.731500098 +0000 UTC m=+3631.151873621" observedRunningTime="2026-02-23 11:06:38.371822418 +0000 UTC m=+3631.792195951" watchObservedRunningTime="2026-02-23 11:06:38.380347731 +0000 UTC m=+3631.800721254" Feb 23 11:06:38 crc kubenswrapper[4904]: I0223 11:06:38.486700 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:38 crc kubenswrapper[4904]: I0223 11:06:38.486785 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:39 crc kubenswrapper[4904]: I0223 11:06:39.562921 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lz4f7" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="registry-server" probeResult="failure" output=< Feb 23 11:06:39 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 11:06:39 crc kubenswrapper[4904]: > Feb 23 11:06:47 crc kubenswrapper[4904]: I0223 11:06:47.398232 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:06:47 crc kubenswrapper[4904]: I0223 11:06:47.398764 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:06:48 crc kubenswrapper[4904]: I0223 11:06:48.577958 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:48 crc kubenswrapper[4904]: I0223 11:06:48.659431 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:48 crc kubenswrapper[4904]: I0223 11:06:48.842776 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lz4f7"] Feb 23 11:06:50 crc kubenswrapper[4904]: I0223 11:06:50.471555 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lz4f7" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="registry-server" containerID="cri-o://bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c" gracePeriod=2 Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.132616 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.236791 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrl96\" (UniqueName: \"kubernetes.io/projected/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-kube-api-access-mrl96\") pod \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.236833 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-catalog-content\") pod \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.236885 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-utilities\") pod \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\" (UID: \"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85\") " Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.237998 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-utilities" (OuterVolumeSpecName: "utilities") pod "3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" (UID: "3a3c2f04-ff0d-4dd0-8e15-7b544e443b85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.245660 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-kube-api-access-mrl96" (OuterVolumeSpecName: "kube-api-access-mrl96") pod "3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" (UID: "3a3c2f04-ff0d-4dd0-8e15-7b544e443b85"). InnerVolumeSpecName "kube-api-access-mrl96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.251523 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l2hfw"] Feb 23 11:06:51 crc kubenswrapper[4904]: E0223 11:06:51.252140 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="extract-content" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.252168 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="extract-content" Feb 23 11:06:51 crc kubenswrapper[4904]: E0223 11:06:51.252217 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="extract-utilities" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.252229 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="extract-utilities" Feb 23 11:06:51 crc kubenswrapper[4904]: E0223 11:06:51.252261 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="registry-server" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.252271 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="registry-server" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.252616 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" containerName="registry-server" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.254924 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.313811 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2hfw"] Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.339339 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-catalog-content\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.339388 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjst\" (UniqueName: \"kubernetes.io/projected/2ccef794-fac7-4be7-94f9-d1fdc086be40-kube-api-access-8mjst\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.339409 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-utilities\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.339651 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrl96\" (UniqueName: \"kubernetes.io/projected/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-kube-api-access-mrl96\") on node \"crc\" DevicePath \"\"" Feb 23 11:06:51 crc 
kubenswrapper[4904]: I0223 11:06:51.339735 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.389292 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" (UID: "3a3c2f04-ff0d-4dd0-8e15-7b544e443b85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.443257 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-catalog-content\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.443381 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mjst\" (UniqueName: \"kubernetes.io/projected/2ccef794-fac7-4be7-94f9-d1fdc086be40-kube-api-access-8mjst\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.443466 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-utilities\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.443700 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.443765 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-catalog-content\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.443899 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-utilities\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.461321 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mjst\" (UniqueName: \"kubernetes.io/projected/2ccef794-fac7-4be7-94f9-d1fdc086be40-kube-api-access-8mjst\") pod \"community-operators-l2hfw\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.481127 4904 generic.go:334] "Generic (PLEG): container finished" podID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" 
containerID="bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c" exitCode=0 Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.481171 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4f7" event={"ID":"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85","Type":"ContainerDied","Data":"bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c"} Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.481198 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lz4f7" event={"ID":"3a3c2f04-ff0d-4dd0-8e15-7b544e443b85","Type":"ContainerDied","Data":"f71e080bca5e27838fc1afa26074328dfc93e08e05a841795ac27f4d372e068c"} Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.481217 4904 scope.go:117] "RemoveContainer" containerID="bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.481335 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lz4f7" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.541351 4904 scope.go:117] "RemoveContainer" containerID="aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.542397 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lz4f7"] Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.555577 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lz4f7"] Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.573834 4904 scope.go:117] "RemoveContainer" containerID="71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.596317 4904 scope.go:117] "RemoveContainer" containerID="bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c" Feb 23 11:06:51 crc kubenswrapper[4904]: E0223 11:06:51.596757 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c\": container with ID starting with bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c not found: ID does not exist" containerID="bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.596851 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c"} err="failed to get container status \"bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c\": rpc error: code = NotFound desc = could not find container \"bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c\": container with ID starting with bd6b8b98f651e2ddd26dcde13325694c28f5277160963fad6aa29a99df6d952c not found: ID does not exist" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.596924 4904 scope.go:117] "RemoveContainer" containerID="aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e" Feb 23 11:06:51 crc kubenswrapper[4904]: E0223 11:06:51.597302 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e\": container with ID starting with 
aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e not found: ID does not exist" containerID="aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.597529 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e"} err="failed to get container status \"aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e\": rpc error: code = NotFound desc = could not find container \"aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e\": container with ID starting with aa42c93b42d271e2876f9141e4accda764602323f6874fa839a3a6fed533871e not found: ID does not exist" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.597558 4904 scope.go:117] "RemoveContainer" containerID="71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e" Feb 23 11:06:51 crc kubenswrapper[4904]: E0223 11:06:51.597989 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e\": container with ID starting with 71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e not found: ID does not exist" containerID="71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.598090 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e"} err="failed to get container status \"71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e\": rpc error: code = NotFound desc = could not find container \"71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e\": container with ID starting with 71c617bb05b11d9d0e1b73483c7a1377a5ca404db622f813fe046f770e4a2c1e not found: ID does not exist" Feb 23 11:06:51 crc kubenswrapper[4904]: I0223 11:06:51.628553 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:06:52 crc kubenswrapper[4904]: I0223 11:06:52.158290 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2hfw"] Feb 23 11:06:52 crc kubenswrapper[4904]: W0223 11:06:52.167175 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ccef794_fac7_4be7_94f9_d1fdc086be40.slice/crio-be59b3a65450c2ced5f9a6df6b18329e626989cff2587845c5870f375c125659 WatchSource:0}: Error finding container be59b3a65450c2ced5f9a6df6b18329e626989cff2587845c5870f375c125659: Status 404 returned error can't find the container with id be59b3a65450c2ced5f9a6df6b18329e626989cff2587845c5870f375c125659 Feb 23 11:06:52 crc kubenswrapper[4904]: I0223 11:06:52.492608 4904 generic.go:334] "Generic (PLEG): container finished" podID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerID="f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1" exitCode=0 Feb 23 11:06:52 crc kubenswrapper[4904]: I0223 11:06:52.492819 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2hfw" event={"ID":"2ccef794-fac7-4be7-94f9-d1fdc086be40","Type":"ContainerDied","Data":"f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1"} Feb 23 11:06:52 crc kubenswrapper[4904]: I0223 11:06:52.493052 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2hfw" event={"ID":"2ccef794-fac7-4be7-94f9-d1fdc086be40","Type":"ContainerStarted","Data":"be59b3a65450c2ced5f9a6df6b18329e626989cff2587845c5870f375c125659"} Feb 23 11:06:53 crc kubenswrapper[4904]: I0223 11:06:53.298696 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a3c2f04-ff0d-4dd0-8e15-7b544e443b85" path="/var/lib/kubelet/pods/3a3c2f04-ff0d-4dd0-8e15-7b544e443b85/volumes" Feb 23 11:06:54 crc kubenswrapper[4904]: I0223 11:06:54.523630 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2hfw" event={"ID":"2ccef794-fac7-4be7-94f9-d1fdc086be40","Type":"ContainerStarted","Data":"33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca"} Feb 23 11:06:55 crc kubenswrapper[4904]: I0223 11:06:55.536571 4904 generic.go:334] "Generic (PLEG): container finished" podID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerID="33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca" exitCode=0 Feb 23 11:06:55 crc kubenswrapper[4904]: I0223 11:06:55.536849 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2hfw" event={"ID":"2ccef794-fac7-4be7-94f9-d1fdc086be40","Type":"ContainerDied","Data":"33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca"} Feb 23 11:06:56 crc kubenswrapper[4904]: I0223 11:06:56.552283 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2hfw" event={"ID":"2ccef794-fac7-4be7-94f9-d1fdc086be40","Type":"ContainerStarted","Data":"f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971"} Feb 23 11:06:56 crc kubenswrapper[4904]: I0223 11:06:56.577614 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l2hfw" podStartSLOduration=2.120428873 podStartE2EDuration="5.577593093s" podCreationTimestamp="2026-02-23 11:06:51 +0000 UTC" firstStartedPulling="2026-02-23 11:06:52.494969397 +0000 UTC 
m=+3645.915342920" lastFinishedPulling="2026-02-23 11:06:55.952133637 +0000 UTC m=+3649.372507140" observedRunningTime="2026-02-23 11:06:56.575208276 +0000 UTC m=+3649.995581789" watchObservedRunningTime="2026-02-23 11:06:56.577593093 +0000 UTC m=+3649.997966616" Feb 23 11:07:01 crc kubenswrapper[4904]: I0223 11:07:01.629304 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:07:01 crc kubenswrapper[4904]: I0223 11:07:01.629873 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:07:01 crc kubenswrapper[4904]: I0223 11:07:01.681608 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:07:02 crc kubenswrapper[4904]: I0223 11:07:02.695629 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:07:02 crc kubenswrapper[4904]: I0223 11:07:02.766172 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2hfw"] Feb 23 11:07:04 crc kubenswrapper[4904]: I0223 11:07:04.639015 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l2hfw" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerName="registry-server" containerID="cri-o://f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971" gracePeriod=2 Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.185140 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.285282 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-utilities\") pod \"2ccef794-fac7-4be7-94f9-d1fdc086be40\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.285705 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-catalog-content\") pod \"2ccef794-fac7-4be7-94f9-d1fdc086be40\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.285894 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mjst\" (UniqueName: \"kubernetes.io/projected/2ccef794-fac7-4be7-94f9-d1fdc086be40-kube-api-access-8mjst\") pod \"2ccef794-fac7-4be7-94f9-d1fdc086be40\" (UID: \"2ccef794-fac7-4be7-94f9-d1fdc086be40\") " Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.286406 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-utilities" (OuterVolumeSpecName: "utilities") pod "2ccef794-fac7-4be7-94f9-d1fdc086be40" (UID: "2ccef794-fac7-4be7-94f9-d1fdc086be40"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.293449 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ccef794-fac7-4be7-94f9-d1fdc086be40-kube-api-access-8mjst" (OuterVolumeSpecName: "kube-api-access-8mjst") pod "2ccef794-fac7-4be7-94f9-d1fdc086be40" (UID: "2ccef794-fac7-4be7-94f9-d1fdc086be40"). InnerVolumeSpecName "kube-api-access-8mjst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.350626 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ccef794-fac7-4be7-94f9-d1fdc086be40" (UID: "2ccef794-fac7-4be7-94f9-d1fdc086be40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.388612 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.388642 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mjst\" (UniqueName: \"kubernetes.io/projected/2ccef794-fac7-4be7-94f9-d1fdc086be40-kube-api-access-8mjst\") on node \"crc\" DevicePath \"\"" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.388653 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ccef794-fac7-4be7-94f9-d1fdc086be40-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.654112 4904 generic.go:334] "Generic (PLEG): container finished" podID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerID="f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971" exitCode=0 Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.654160 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2hfw" event={"ID":"2ccef794-fac7-4be7-94f9-d1fdc086be40","Type":"ContainerDied","Data":"f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971"} Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.654193 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2hfw" event={"ID":"2ccef794-fac7-4be7-94f9-d1fdc086be40","Type":"ContainerDied","Data":"be59b3a65450c2ced5f9a6df6b18329e626989cff2587845c5870f375c125659"} Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.654211 4904 scope.go:117] "RemoveContainer" containerID="f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.654243 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2hfw" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.695701 4904 scope.go:117] "RemoveContainer" containerID="33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.719604 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2hfw"] Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.732160 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l2hfw"] Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.737238 4904 scope.go:117] "RemoveContainer" containerID="f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.771809 4904 scope.go:117] "RemoveContainer" containerID="f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971" Feb 23 11:07:05 crc kubenswrapper[4904]: E0223 11:07:05.772429 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971\": container with ID starting with f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971 not found: ID does not exist" containerID="f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.772468 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971"} err="failed to get container status \"f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971\": rpc error: code = NotFound desc = could not find container \"f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971\": container with ID starting with f4f7ab3542cef6624545698dc4e8ba9c9f7d3cd698e655ffeadffca2b5ea4971 not found: ID does not exist" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.772492 4904 scope.go:117] "RemoveContainer" containerID="33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca" Feb 23 11:07:05 crc kubenswrapper[4904]: E0223 11:07:05.773082 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca\": container with ID starting with 33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca not found: ID does not exist" containerID="33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.773118 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca"} err="failed to get container status \"33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca\": rpc error: code = NotFound desc = could not find container \"33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca\": container with ID starting with 33ba0e9b67b5a05183ae1ecaf6fc427de6568f5bb92ddd8c9c6b6593806374ca not found: ID does not exist" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.773140 4904 scope.go:117] "RemoveContainer" containerID="f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1" Feb 23 11:07:05 crc kubenswrapper[4904]: E0223 11:07:05.773601 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1\": container with ID starting with f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1 not found: ID does not exist" containerID="f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1" Feb 23 11:07:05 crc kubenswrapper[4904]: I0223 11:07:05.773634 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1"} err="failed to get container status \"f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1\": rpc error: code = NotFound desc = could not find container \"f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1\": container with ID starting with f104fff11f1be22d41f276ff832f60efc2a60271858f21bdd8d3edcb98123fa1 not found: ID does not exist" Feb 23 11:07:07 crc kubenswrapper[4904]: I0223 11:07:07.276983 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" path="/var/lib/kubelet/pods/2ccef794-fac7-4be7-94f9-d1fdc086be40/volumes" Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.398778 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.399521 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.399580 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.400482 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.400556 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" gracePeriod=600 Feb 23 11:07:17 crc kubenswrapper[4904]: E0223 11:07:17.547065 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.794806 4904 generic.go:334] 
"Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" exitCode=0 Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.794876 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23"} Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.794924 4904 scope.go:117] "RemoveContainer" containerID="42925d3ee1d179ea60e0f55f2416554c8d91a37a7a505bfbf7feb9da06d8c0c0" Feb 23 11:07:17 crc kubenswrapper[4904]: I0223 11:07:17.796511 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:07:17 crc kubenswrapper[4904]: E0223 11:07:17.797315 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:07:29 crc kubenswrapper[4904]: I0223 11:07:29.255809 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:07:29 crc kubenswrapper[4904]: E0223 11:07:29.258946 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:07:43 crc kubenswrapper[4904]: I0223 11:07:43.257231 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:07:43 crc kubenswrapper[4904]: E0223 11:07:43.258254 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:07:54 crc kubenswrapper[4904]: I0223 11:07:54.255324 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:07:54 crc kubenswrapper[4904]: E0223 11:07:54.256587 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:08:06 crc kubenswrapper[4904]: I0223 11:08:06.256527 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" 
Feb 23 11:08:06 crc kubenswrapper[4904]: E0223 11:08:06.257955 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:08:20 crc kubenswrapper[4904]: I0223 11:08:20.255929 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:08:20 crc kubenswrapper[4904]: E0223 11:08:20.256862 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:08:32 crc kubenswrapper[4904]: I0223 11:08:32.256176 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:08:32 crc kubenswrapper[4904]: E0223 11:08:32.257165 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:08:43 crc kubenswrapper[4904]: I0223 11:08:43.256084 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:08:43 crc kubenswrapper[4904]: E0223 11:08:43.257393 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:08:57 crc kubenswrapper[4904]: I0223 11:08:57.269190 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:08:57 crc kubenswrapper[4904]: E0223 11:08:57.270454 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:09:12 crc kubenswrapper[4904]: I0223 11:09:12.256462 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:09:12 crc kubenswrapper[4904]: E0223 11:09:12.257699 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:09:24 crc kubenswrapper[4904]: I0223 11:09:24.256368 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:09:24 crc kubenswrapper[4904]: E0223 11:09:24.257151 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:09:37 crc kubenswrapper[4904]: I0223 11:09:37.269702 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:09:37 crc kubenswrapper[4904]: E0223 11:09:37.271155 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:09:48 crc kubenswrapper[4904]: I0223 11:09:48.256256 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:09:48 crc kubenswrapper[4904]: E0223 11:09:48.258657 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:09:59 crc kubenswrapper[4904]: I0223 11:09:59.260153 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:09:59 crc kubenswrapper[4904]: E0223 11:09:59.262969 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:10:12 crc kubenswrapper[4904]: I0223 11:10:12.256197 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:10:12 crc kubenswrapper[4904]: E0223 11:10:12.257588 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:10:26 crc kubenswrapper[4904]: I0223 11:10:26.256510 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:10:26 crc kubenswrapper[4904]: E0223 11:10:26.257457 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.609366 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z8452"] Feb 23 11:10:32 crc kubenswrapper[4904]: E0223 11:10:32.611078 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerName="extract-content" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.611113 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerName="extract-content" Feb 23 11:10:32 crc kubenswrapper[4904]: E0223 11:10:32.611150 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerName="registry-server" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.611165 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerName="registry-server" Feb 23 11:10:32 crc kubenswrapper[4904]: E0223 11:10:32.611248 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerName="extract-utilities" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.611266 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerName="extract-utilities" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.611686 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccef794-fac7-4be7-94f9-d1fdc086be40" containerName="registry-server" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.615328 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.651413 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8452"] Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.695973 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-utilities\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.696269 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-catalog-content\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.696357 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttcj\" (UniqueName: \"kubernetes.io/projected/09fc936a-8b4f-4b29-84ef-4e505e033052-kube-api-access-sttcj\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.798638 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-utilities\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.798831 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-catalog-content\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.798874 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sttcj\" (UniqueName: \"kubernetes.io/projected/09fc936a-8b4f-4b29-84ef-4e505e033052-kube-api-access-sttcj\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.799342 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-utilities\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.799576 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-catalog-content\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.827096 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sttcj\" (UniqueName: \"kubernetes.io/projected/09fc936a-8b4f-4b29-84ef-4e505e033052-kube-api-access-sttcj\") pod \"redhat-marketplace-z8452\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:32 crc kubenswrapper[4904]: I0223 11:10:32.949810 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:33 crc kubenswrapper[4904]: I0223 11:10:33.424058 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8452"] Feb 23 11:10:34 crc kubenswrapper[4904]: I0223 11:10:34.191645 4904 generic.go:334] "Generic (PLEG): container finished" podID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerID="0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737" exitCode=0 Feb 23 11:10:34 crc kubenswrapper[4904]: I0223 11:10:34.191773 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8452" event={"ID":"09fc936a-8b4f-4b29-84ef-4e505e033052","Type":"ContainerDied","Data":"0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737"} Feb 23 11:10:34 crc kubenswrapper[4904]: I0223 11:10:34.192180 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8452" event={"ID":"09fc936a-8b4f-4b29-84ef-4e505e033052","Type":"ContainerStarted","Data":"862f9ebbfd5f72bf244ad45693ea2d1b702c0b3e740719872d13477947ddae83"} Feb 23 11:10:34 crc kubenswrapper[4904]: I0223 11:10:34.194098 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 11:10:35 crc kubenswrapper[4904]: I0223 11:10:35.208145 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8452" event={"ID":"09fc936a-8b4f-4b29-84ef-4e505e033052","Type":"ContainerStarted","Data":"ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531"} Feb 23 11:10:36 crc kubenswrapper[4904]: I0223 11:10:36.218638 4904 generic.go:334] "Generic (PLEG): container finished" podID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerID="ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531" exitCode=0 Feb 23 11:10:36 crc kubenswrapper[4904]: I0223 11:10:36.218755 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8452" event={"ID":"09fc936a-8b4f-4b29-84ef-4e505e033052","Type":"ContainerDied","Data":"ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531"} Feb 23 11:10:37 crc kubenswrapper[4904]: I0223 11:10:37.241232 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8452" event={"ID":"09fc936a-8b4f-4b29-84ef-4e505e033052","Type":"ContainerStarted","Data":"d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f"} Feb 23 11:10:37 crc kubenswrapper[4904]: I0223 11:10:37.287258 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z8452" podStartSLOduration=2.8773182139999998 podStartE2EDuration="5.287231585s" podCreationTimestamp="2026-02-23 11:10:32 +0000 UTC" firstStartedPulling="2026-02-23 11:10:34.193794875 +0000 UTC m=+3867.614168388" lastFinishedPulling="2026-02-23 11:10:36.603708226 +0000 UTC m=+3870.024081759" observedRunningTime="2026-02-23 11:10:37.271094946 +0000 UTC m=+3870.691468539" watchObservedRunningTime="2026-02-23 11:10:37.287231585 +0000 UTC 
m=+3870.707605108" Feb 23 11:10:41 crc kubenswrapper[4904]: I0223 11:10:41.255637 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:10:41 crc kubenswrapper[4904]: E0223 11:10:41.256422 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:10:42 crc kubenswrapper[4904]: I0223 11:10:42.949988 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:42 crc kubenswrapper[4904]: I0223 11:10:42.951282 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:43 crc kubenswrapper[4904]: I0223 11:10:43.035511 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:43 crc kubenswrapper[4904]: I0223 11:10:43.349316 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:43 crc kubenswrapper[4904]: I0223 11:10:43.404389 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8452"] Feb 23 11:10:45 crc kubenswrapper[4904]: I0223 11:10:45.319328 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z8452" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerName="registry-server" containerID="cri-o://d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f" gracePeriod=2 Feb 23 11:10:45 crc kubenswrapper[4904]: I0223 11:10:45.990676 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.080318 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-catalog-content\") pod \"09fc936a-8b4f-4b29-84ef-4e505e033052\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.080464 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sttcj\" (UniqueName: \"kubernetes.io/projected/09fc936a-8b4f-4b29-84ef-4e505e033052-kube-api-access-sttcj\") pod \"09fc936a-8b4f-4b29-84ef-4e505e033052\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.080596 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-utilities\") pod \"09fc936a-8b4f-4b29-84ef-4e505e033052\" (UID: \"09fc936a-8b4f-4b29-84ef-4e505e033052\") " Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.081988 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-utilities" (OuterVolumeSpecName: "utilities") pod "09fc936a-8b4f-4b29-84ef-4e505e033052" (UID: "09fc936a-8b4f-4b29-84ef-4e505e033052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.100919 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09fc936a-8b4f-4b29-84ef-4e505e033052-kube-api-access-sttcj" (OuterVolumeSpecName: "kube-api-access-sttcj") pod "09fc936a-8b4f-4b29-84ef-4e505e033052" (UID: "09fc936a-8b4f-4b29-84ef-4e505e033052"). InnerVolumeSpecName "kube-api-access-sttcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.107452 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09fc936a-8b4f-4b29-84ef-4e505e033052" (UID: "09fc936a-8b4f-4b29-84ef-4e505e033052"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.182586 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sttcj\" (UniqueName: \"kubernetes.io/projected/09fc936a-8b4f-4b29-84ef-4e505e033052-kube-api-access-sttcj\") on node \"crc\" DevicePath \"\"" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.182618 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.182639 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc936a-8b4f-4b29-84ef-4e505e033052-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.332034 4904 generic.go:334] "Generic (PLEG): container finished" podID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerID="d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f" exitCode=0 Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.332088 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8452" event={"ID":"09fc936a-8b4f-4b29-84ef-4e505e033052","Type":"ContainerDied","Data":"d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f"} Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.332124 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8452" event={"ID":"09fc936a-8b4f-4b29-84ef-4e505e033052","Type":"ContainerDied","Data":"862f9ebbfd5f72bf244ad45693ea2d1b702c0b3e740719872d13477947ddae83"} Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.332147 4904 scope.go:117] "RemoveContainer" containerID="d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.332153 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8452" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.363155 4904 scope.go:117] "RemoveContainer" containerID="ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.365744 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8452"] Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.375437 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8452"] Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.385175 4904 scope.go:117] "RemoveContainer" containerID="0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.428685 4904 scope.go:117] "RemoveContainer" containerID="d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f" Feb 23 11:10:46 crc kubenswrapper[4904]: E0223 11:10:46.429113 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f\": container with ID starting with d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f not found: ID does not exist" containerID="d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.429143 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f"} err="failed to get container status \"d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f\": rpc error: code = NotFound desc = could not find container \"d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f\": container with ID starting with d67eb2409ac2f65ab0a278026a74a74838f5285b59f1d64169c0bc275079df3f not found: ID does not exist" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.429163 4904 scope.go:117] "RemoveContainer" containerID="ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531" Feb 23 11:10:46 crc kubenswrapper[4904]: E0223 11:10:46.429502 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531\": container with ID starting with ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531 not found: ID does not exist" containerID="ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.429523 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531"} err="failed to get container status \"ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531\": rpc error: code = NotFound desc = could not find container \"ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531\": container with ID starting with ef5d1f14bf9643fa30c3522f319fb4721e0636d5b02d3981c516c7823305e531 not found: ID does not exist" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.429539 4904 scope.go:117] "RemoveContainer" containerID="0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737" Feb 23 11:10:46 crc kubenswrapper[4904]: E0223 11:10:46.429828 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737\": container with ID starting with 0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737 not found: ID does not exist" containerID="0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737" Feb 23 11:10:46 crc kubenswrapper[4904]: I0223 11:10:46.429874 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737"} err="failed to get container status \"0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737\": rpc error: code = NotFound desc = could not find container \"0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737\": container with ID starting with 0286981465f03295f1f15918b8931b776d8e1b2daf080e03e6c622e639501737 not found: ID does not exist" Feb 23 11:10:47 crc kubenswrapper[4904]: I0223 11:10:47.281853 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" path="/var/lib/kubelet/pods/09fc936a-8b4f-4b29-84ef-4e505e033052/volumes" Feb 23 11:10:54 crc kubenswrapper[4904]: I0223 11:10:54.255366 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:10:54 crc kubenswrapper[4904]: E0223 11:10:54.256632 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:11:05 crc kubenswrapper[4904]: I0223 11:11:05.255996 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:11:05 crc kubenswrapper[4904]: E0223 11:11:05.256613 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:11:16 crc kubenswrapper[4904]: I0223 11:11:16.256066 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:11:16 crc kubenswrapper[4904]: E0223 11:11:16.257156 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:11:30 crc kubenswrapper[4904]: I0223 11:11:30.255904 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:11:30 crc kubenswrapper[4904]: E0223 11:11:30.256559 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:11:43 crc kubenswrapper[4904]: I0223 11:11:43.256534 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:11:43 crc kubenswrapper[4904]: E0223 11:11:43.257739 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:11:55 crc kubenswrapper[4904]: I0223 11:11:55.256773 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:11:55 crc kubenswrapper[4904]: E0223 11:11:55.257594 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:12:07 crc kubenswrapper[4904]: I0223 11:12:07.270321 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:12:07 crc kubenswrapper[4904]: E0223 11:12:07.271926 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:12:22 crc kubenswrapper[4904]: I0223 11:12:22.255559 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:12:23 crc kubenswrapper[4904]: I0223 11:12:23.431867 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"98694295b4fea5a54f7ffa6e8e5ab7cbbafee04fc0e285681141bc61a4996c46"} Feb 23 11:14:47 crc kubenswrapper[4904]: I0223 11:14:47.397841 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:14:47 crc kubenswrapper[4904]: I0223 11:14:47.398369 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.194099 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2"] Feb 23 11:15:00 crc kubenswrapper[4904]: E0223 11:15:00.194997 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerName="registry-server" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.195011 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerName="registry-server" Feb 23 11:15:00 crc kubenswrapper[4904]: E0223 11:15:00.195022 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerName="extract-content" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.195030 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerName="extract-content" Feb 23 11:15:00 crc kubenswrapper[4904]: E0223 11:15:00.195056 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerName="extract-utilities" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.195065 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerName="extract-utilities" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.195436 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="09fc936a-8b4f-4b29-84ef-4e505e033052" containerName="registry-server" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.196253 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.199125 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.200103 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.205639 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2"] Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.313182 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-secret-volume\") pod \"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.313482 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-config-volume\") pod \"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.313602 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbf8k\" (UniqueName: \"kubernetes.io/projected/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-kube-api-access-qbf8k\") pod \"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.415745 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbf8k\" (UniqueName: \"kubernetes.io/projected/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-kube-api-access-qbf8k\") pod \"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.415927 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-secret-volume\") pod \"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.415989 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-config-volume\") pod \"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.416847 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-config-volume\") pod 
\"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.430361 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-secret-volume\") pod \"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.432079 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbf8k\" (UniqueName: \"kubernetes.io/projected/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-kube-api-access-qbf8k\") pod \"collect-profiles-29530755-j6pw2\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:00 crc kubenswrapper[4904]: I0223 11:15:00.543509 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:02 crc kubenswrapper[4904]: I0223 11:15:02.427465 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2"] Feb 23 11:15:03 crc kubenswrapper[4904]: I0223 11:15:03.424342 4904 generic.go:334] "Generic (PLEG): container finished" podID="c7fa5b2f-168c-4b9d-90c1-e60aa040930e" containerID="2cefd1c203c09e8c5c8fbe974e43b468cb5ea5ae2bb94bcc4503b1b9265d07ce" exitCode=0 Feb 23 11:15:03 crc kubenswrapper[4904]: I0223 11:15:03.424914 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" event={"ID":"c7fa5b2f-168c-4b9d-90c1-e60aa040930e","Type":"ContainerDied","Data":"2cefd1c203c09e8c5c8fbe974e43b468cb5ea5ae2bb94bcc4503b1b9265d07ce"} Feb 23 11:15:03 crc kubenswrapper[4904]: I0223 11:15:03.424978 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" event={"ID":"c7fa5b2f-168c-4b9d-90c1-e60aa040930e","Type":"ContainerStarted","Data":"f48f5389304ebfa5d3656bea369d80000708fa7ee57b5c85e53c81c21d1a392d"} Feb 23 11:15:04 crc kubenswrapper[4904]: I0223 11:15:04.892090 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.091529 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbf8k\" (UniqueName: \"kubernetes.io/projected/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-kube-api-access-qbf8k\") pod \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.091955 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-secret-volume\") pod \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.092113 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-config-volume\") pod \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\" (UID: \"c7fa5b2f-168c-4b9d-90c1-e60aa040930e\") " Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.092898 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c7fa5b2f-168c-4b9d-90c1-e60aa040930e" (UID: "c7fa5b2f-168c-4b9d-90c1-e60aa040930e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.101128 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c7fa5b2f-168c-4b9d-90c1-e60aa040930e" (UID: "c7fa5b2f-168c-4b9d-90c1-e60aa040930e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.101156 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-kube-api-access-qbf8k" (OuterVolumeSpecName: "kube-api-access-qbf8k") pod "c7fa5b2f-168c-4b9d-90c1-e60aa040930e" (UID: "c7fa5b2f-168c-4b9d-90c1-e60aa040930e"). InnerVolumeSpecName "kube-api-access-qbf8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.194365 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.194400 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbf8k\" (UniqueName: \"kubernetes.io/projected/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-kube-api-access-qbf8k\") on node \"crc\" DevicePath \"\"" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.194411 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7fa5b2f-168c-4b9d-90c1-e60aa040930e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.451157 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" event={"ID":"c7fa5b2f-168c-4b9d-90c1-e60aa040930e","Type":"ContainerDied","Data":"f48f5389304ebfa5d3656bea369d80000708fa7ee57b5c85e53c81c21d1a392d"} Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.451215 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48f5389304ebfa5d3656bea369d80000708fa7ee57b5c85e53c81c21d1a392d" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.451271 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530755-j6pw2" Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.986738 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8"] Feb 23 11:15:05 crc kubenswrapper[4904]: I0223 11:15:05.997062 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530710-4cvl8"] Feb 23 11:15:07 crc kubenswrapper[4904]: I0223 11:15:07.275801 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a290347-5e68-4afc-b963-25404ea29fef" path="/var/lib/kubelet/pods/2a290347-5e68-4afc-b963-25404ea29fef/volumes" Feb 23 11:15:17 crc kubenswrapper[4904]: I0223 11:15:17.398816 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:15:17 crc kubenswrapper[4904]: I0223 11:15:17.399443 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:15:47 crc kubenswrapper[4904]: I0223 11:15:47.221366 4904 scope.go:117] "RemoveContainer" containerID="2980df3aec0c1757ce98319bcc166a182bc963ac12a97c3da265b4662cdefef2" Feb 23 11:15:47 crc kubenswrapper[4904]: I0223 11:15:47.398496 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 23 11:15:47 crc kubenswrapper[4904]: I0223 11:15:47.398948 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:15:47 crc kubenswrapper[4904]: I0223 11:15:47.399011 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 11:15:47 crc kubenswrapper[4904]: I0223 11:15:47.399978 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98694295b4fea5a54f7ffa6e8e5ab7cbbafee04fc0e285681141bc61a4996c46"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 11:15:47 crc kubenswrapper[4904]: I0223 11:15:47.400070 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://98694295b4fea5a54f7ffa6e8e5ab7cbbafee04fc0e285681141bc61a4996c46" gracePeriod=600 Feb 23 11:15:48 crc kubenswrapper[4904]: I0223 11:15:48.021400 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="98694295b4fea5a54f7ffa6e8e5ab7cbbafee04fc0e285681141bc61a4996c46" exitCode=0 Feb 23 11:15:48 crc kubenswrapper[4904]: I0223 11:15:48.021488 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"98694295b4fea5a54f7ffa6e8e5ab7cbbafee04fc0e285681141bc61a4996c46"} Feb 23 11:15:48 crc kubenswrapper[4904]: I0223 11:15:48.021818 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063"} Feb 23 11:15:48 crc kubenswrapper[4904]: I0223 11:15:48.021860 4904 scope.go:117] "RemoveContainer" containerID="5a4ebdaa1b259795d9334c95243e86c7ce3b29d428016052d2458e4196518f23" Feb 23 11:16:36 crc kubenswrapper[4904]: I0223 11:16:36.829348 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f5r5b"] Feb 23 11:16:36 crc kubenswrapper[4904]: E0223 11:16:36.830937 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fa5b2f-168c-4b9d-90c1-e60aa040930e" containerName="collect-profiles" Feb 23 11:16:36 crc kubenswrapper[4904]: I0223 11:16:36.830988 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fa5b2f-168c-4b9d-90c1-e60aa040930e" containerName="collect-profiles" Feb 23 11:16:36 crc kubenswrapper[4904]: I0223 11:16:36.831519 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fa5b2f-168c-4b9d-90c1-e60aa040930e" containerName="collect-profiles" Feb 23 11:16:36 crc kubenswrapper[4904]: I0223 11:16:36.833888 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:36 crc kubenswrapper[4904]: I0223 11:16:36.847649 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5r5b"] Feb 23 11:16:36 crc kubenswrapper[4904]: I0223 11:16:36.970527 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwdxd\" (UniqueName: \"kubernetes.io/projected/e12e31b2-8886-4f14-85ef-f791dcb4159e-kube-api-access-nwdxd\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:36 crc kubenswrapper[4904]: I0223 11:16:36.970600 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-catalog-content\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:36 crc kubenswrapper[4904]: I0223 11:16:36.970622 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-utilities\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:37 crc kubenswrapper[4904]: I0223 11:16:37.073012 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwdxd\" (UniqueName: \"kubernetes.io/projected/e12e31b2-8886-4f14-85ef-f791dcb4159e-kube-api-access-nwdxd\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:37 crc kubenswrapper[4904]: I0223 11:16:37.073499 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-catalog-content\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:37 crc kubenswrapper[4904]: I0223 11:16:37.073526 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-utilities\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:37 crc kubenswrapper[4904]: I0223 11:16:37.074090 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-utilities\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:37 crc kubenswrapper[4904]: I0223 11:16:37.074079 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-catalog-content\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:37 crc kubenswrapper[4904]: I0223 11:16:37.095671 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nwdxd\" (UniqueName: \"kubernetes.io/projected/e12e31b2-8886-4f14-85ef-f791dcb4159e-kube-api-access-nwdxd\") pod \"certified-operators-f5r5b\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:37 crc kubenswrapper[4904]: I0223 11:16:37.157411 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:37 crc kubenswrapper[4904]: I0223 11:16:37.744967 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5r5b"] Feb 23 11:16:38 crc kubenswrapper[4904]: I0223 11:16:38.530496 4904 generic.go:334] "Generic (PLEG): container finished" podID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerID="8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b" exitCode=0 Feb 23 11:16:38 crc kubenswrapper[4904]: I0223 11:16:38.530573 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5r5b" event={"ID":"e12e31b2-8886-4f14-85ef-f791dcb4159e","Type":"ContainerDied","Data":"8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b"} Feb 23 11:16:38 crc kubenswrapper[4904]: I0223 11:16:38.530824 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5r5b" event={"ID":"e12e31b2-8886-4f14-85ef-f791dcb4159e","Type":"ContainerStarted","Data":"283fe21b775f9350c018d19c8dce865b3b3797266c02314d491017ea959a49fc"} Feb 23 11:16:38 crc kubenswrapper[4904]: I0223 11:16:38.532804 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 11:16:40 crc kubenswrapper[4904]: I0223 11:16:40.551317 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5r5b" event={"ID":"e12e31b2-8886-4f14-85ef-f791dcb4159e","Type":"ContainerStarted","Data":"a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0"} Feb 23 11:16:42 crc kubenswrapper[4904]: E0223 11:16:42.014038 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode12e31b2_8886_4f14_85ef_f791dcb4159e.slice/crio-conmon-a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode12e31b2_8886_4f14_85ef_f791dcb4159e.slice/crio-a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0.scope\": RecentStats: unable to find data in memory cache]" Feb 23 11:16:42 crc kubenswrapper[4904]: I0223 11:16:42.575238 4904 generic.go:334] "Generic (PLEG): container finished" podID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerID="a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0" exitCode=0 Feb 23 11:16:42 crc kubenswrapper[4904]: I0223 11:16:42.575295 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5r5b" event={"ID":"e12e31b2-8886-4f14-85ef-f791dcb4159e","Type":"ContainerDied","Data":"a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0"} Feb 23 11:16:43 crc kubenswrapper[4904]: I0223 11:16:43.594098 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5r5b" 
event={"ID":"e12e31b2-8886-4f14-85ef-f791dcb4159e","Type":"ContainerStarted","Data":"5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe"} Feb 23 11:16:43 crc kubenswrapper[4904]: I0223 11:16:43.622676 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f5r5b" podStartSLOduration=3.015219676 podStartE2EDuration="7.622658799s" podCreationTimestamp="2026-02-23 11:16:36 +0000 UTC" firstStartedPulling="2026-02-23 11:16:38.532547252 +0000 UTC m=+4231.952920765" lastFinishedPulling="2026-02-23 11:16:43.139986365 +0000 UTC m=+4236.560359888" observedRunningTime="2026-02-23 11:16:43.618961273 +0000 UTC m=+4237.039334786" watchObservedRunningTime="2026-02-23 11:16:43.622658799 +0000 UTC m=+4237.043032322" Feb 23 11:16:47 crc kubenswrapper[4904]: I0223 11:16:47.158311 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:47 crc kubenswrapper[4904]: I0223 11:16:47.158651 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:47 crc kubenswrapper[4904]: I0223 11:16:47.230085 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:57 crc kubenswrapper[4904]: I0223 11:16:57.249356 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:57 crc kubenswrapper[4904]: I0223 11:16:57.337160 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5r5b"] Feb 23 11:16:57 crc kubenswrapper[4904]: I0223 11:16:57.729225 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f5r5b" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerName="registry-server" containerID="cri-o://5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe" gracePeriod=2 Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.280675 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.444557 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-utilities\") pod \"e12e31b2-8886-4f14-85ef-f791dcb4159e\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.444853 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwdxd\" (UniqueName: \"kubernetes.io/projected/e12e31b2-8886-4f14-85ef-f791dcb4159e-kube-api-access-nwdxd\") pod \"e12e31b2-8886-4f14-85ef-f791dcb4159e\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.444981 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-catalog-content\") pod \"e12e31b2-8886-4f14-85ef-f791dcb4159e\" (UID: \"e12e31b2-8886-4f14-85ef-f791dcb4159e\") " Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.445784 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-utilities" (OuterVolumeSpecName: "utilities") pod "e12e31b2-8886-4f14-85ef-f791dcb4159e" (UID: "e12e31b2-8886-4f14-85ef-f791dcb4159e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.452326 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12e31b2-8886-4f14-85ef-f791dcb4159e-kube-api-access-nwdxd" (OuterVolumeSpecName: "kube-api-access-nwdxd") pod "e12e31b2-8886-4f14-85ef-f791dcb4159e" (UID: "e12e31b2-8886-4f14-85ef-f791dcb4159e"). InnerVolumeSpecName "kube-api-access-nwdxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.493076 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e12e31b2-8886-4f14-85ef-f791dcb4159e" (UID: "e12e31b2-8886-4f14-85ef-f791dcb4159e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.547704 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.547766 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwdxd\" (UniqueName: \"kubernetes.io/projected/e12e31b2-8886-4f14-85ef-f791dcb4159e-kube-api-access-nwdxd\") on node \"crc\" DevicePath \"\"" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.547781 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e12e31b2-8886-4f14-85ef-f791dcb4159e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.753673 4904 generic.go:334] "Generic (PLEG): container finished" podID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerID="5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe" exitCode=0 Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.753742 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5r5b" event={"ID":"e12e31b2-8886-4f14-85ef-f791dcb4159e","Type":"ContainerDied","Data":"5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe"} Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.753773 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5r5b" event={"ID":"e12e31b2-8886-4f14-85ef-f791dcb4159e","Type":"ContainerDied","Data":"283fe21b775f9350c018d19c8dce865b3b3797266c02314d491017ea959a49fc"} Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.753789 4904 scope.go:117] "RemoveContainer" containerID="5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.753863 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5r5b" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.797828 4904 scope.go:117] "RemoveContainer" containerID="a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.808983 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5r5b"] Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.818249 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f5r5b"] Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.827770 4904 scope.go:117] "RemoveContainer" containerID="8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.876191 4904 scope.go:117] "RemoveContainer" containerID="5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe" Feb 23 11:16:58 crc kubenswrapper[4904]: E0223 11:16:58.877033 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe\": container with ID starting with 5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe not found: ID does not exist" containerID="5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.877097 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe"} err="failed to get container status \"5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe\": rpc error: code = NotFound desc = could not find container \"5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe\": container with ID starting with 5347b32dea47b78a9e1f244f23f8764c035b5b966f61259ca919ba348f07acbe not found: ID does not exist" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.877141 4904 scope.go:117] "RemoveContainer" containerID="a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0" Feb 23 11:16:58 crc kubenswrapper[4904]: E0223 11:16:58.877789 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0\": container with ID starting with a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0 not found: ID does not exist" containerID="a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.877828 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0"} err="failed to get container status \"a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0\": rpc error: code = NotFound desc = could not find container \"a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0\": container with ID starting with a9703d1b2366c8dbea2f2af673a54d2c64d20c7407007b4488afc372841b46c0 not found: ID does not exist" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.877857 4904 scope.go:117] "RemoveContainer" containerID="8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b" Feb 23 11:16:58 crc kubenswrapper[4904]: E0223 11:16:58.878428 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b\": container with ID starting with 8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b not found: ID does not exist" containerID="8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b" Feb 23 11:16:58 crc kubenswrapper[4904]: I0223 11:16:58.878458 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b"} err="failed to get container status \"8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b\": rpc error: code = NotFound desc = could not find container \"8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b\": container with ID starting with 8d53220d0a5c30d5790f2b03f89a049f931fedcb40bdc46910bbd3bae1fed23b not found: ID does not exist" Feb 23 11:16:59 crc kubenswrapper[4904]: I0223 11:16:59.276803 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" path="/var/lib/kubelet/pods/e12e31b2-8886-4f14-85ef-f791dcb4159e/volumes" Feb 23 11:17:47 crc kubenswrapper[4904]: I0223 11:17:47.398850 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:17:47 crc kubenswrapper[4904]: I0223 11:17:47.399553 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.012627 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lkvc8"] Feb 23 11:18:16 crc kubenswrapper[4904]: E0223 11:18:16.013800 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerName="extract-utilities" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.013820 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerName="extract-utilities" Feb 23 11:18:16 crc kubenswrapper[4904]: E0223 11:18:16.013849 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerName="extract-content" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.013859 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerName="extract-content" Feb 23 11:18:16 crc kubenswrapper[4904]: E0223 11:18:16.013902 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerName="registry-server" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.013911 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerName="registry-server" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.014281 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12e31b2-8886-4f14-85ef-f791dcb4159e" containerName="registry-server" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 
11:18:16.016439 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.027960 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkvc8"] Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.136604 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krx67\" (UniqueName: \"kubernetes.io/projected/fce99428-6e11-4229-b4de-e2bc8e84f4eb-kube-api-access-krx67\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.136684 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-utilities\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.137163 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-catalog-content\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.238642 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-catalog-content\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.238762 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krx67\" (UniqueName: \"kubernetes.io/projected/fce99428-6e11-4229-b4de-e2bc8e84f4eb-kube-api-access-krx67\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.238791 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-utilities\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.239243 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-utilities\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.239858 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-catalog-content\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc 
kubenswrapper[4904]: I0223 11:18:16.268106 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krx67\" (UniqueName: \"kubernetes.io/projected/fce99428-6e11-4229-b4de-e2bc8e84f4eb-kube-api-access-krx67\") pod \"community-operators-lkvc8\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.354217 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:16 crc kubenswrapper[4904]: I0223 11:18:16.918000 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkvc8"] Feb 23 11:18:17 crc kubenswrapper[4904]: I0223 11:18:17.398163 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:18:17 crc kubenswrapper[4904]: I0223 11:18:17.398616 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:18:17 crc kubenswrapper[4904]: I0223 11:18:17.550697 4904 generic.go:334] "Generic (PLEG): container finished" podID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerID="1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2" exitCode=0 Feb 23 11:18:17 crc kubenswrapper[4904]: I0223 11:18:17.550747 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkvc8" event={"ID":"fce99428-6e11-4229-b4de-e2bc8e84f4eb","Type":"ContainerDied","Data":"1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2"} Feb 23 11:18:17 crc kubenswrapper[4904]: I0223 11:18:17.550792 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkvc8" event={"ID":"fce99428-6e11-4229-b4de-e2bc8e84f4eb","Type":"ContainerStarted","Data":"8bce5dd13b091b1415add150e96abf351b44009b4af8668c494d25fb5ca590ec"} Feb 23 11:18:18 crc kubenswrapper[4904]: I0223 11:18:18.567515 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkvc8" event={"ID":"fce99428-6e11-4229-b4de-e2bc8e84f4eb","Type":"ContainerStarted","Data":"bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d"} Feb 23 11:18:19 crc kubenswrapper[4904]: I0223 11:18:19.577976 4904 generic.go:334] "Generic (PLEG): container finished" podID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerID="bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d" exitCode=0 Feb 23 11:18:19 crc kubenswrapper[4904]: I0223 11:18:19.578076 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkvc8" event={"ID":"fce99428-6e11-4229-b4de-e2bc8e84f4eb","Type":"ContainerDied","Data":"bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d"} Feb 23 11:18:20 crc kubenswrapper[4904]: I0223 11:18:20.591708 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkvc8" 
event={"ID":"fce99428-6e11-4229-b4de-e2bc8e84f4eb","Type":"ContainerStarted","Data":"8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a"} Feb 23 11:18:20 crc kubenswrapper[4904]: I0223 11:18:20.619319 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lkvc8" podStartSLOduration=3.193055209 podStartE2EDuration="5.619295692s" podCreationTimestamp="2026-02-23 11:18:15 +0000 UTC" firstStartedPulling="2026-02-23 11:18:17.553839392 +0000 UTC m=+4330.974212915" lastFinishedPulling="2026-02-23 11:18:19.980079835 +0000 UTC m=+4333.400453398" observedRunningTime="2026-02-23 11:18:20.613057524 +0000 UTC m=+4334.033431047" watchObservedRunningTime="2026-02-23 11:18:20.619295692 +0000 UTC m=+4334.039669215" Feb 23 11:18:26 crc kubenswrapper[4904]: I0223 11:18:26.354652 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:26 crc kubenswrapper[4904]: I0223 11:18:26.355445 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:26 crc kubenswrapper[4904]: I0223 11:18:26.414366 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:26 crc kubenswrapper[4904]: I0223 11:18:26.704537 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:26 crc kubenswrapper[4904]: I0223 11:18:26.762729 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lkvc8"] Feb 23 11:18:28 crc kubenswrapper[4904]: I0223 11:18:28.670649 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lkvc8" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerName="registry-server" containerID="cri-o://8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a" gracePeriod=2 Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.144004 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.245257 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-catalog-content\") pod \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.245339 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-utilities\") pod \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.245396 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krx67\" (UniqueName: \"kubernetes.io/projected/fce99428-6e11-4229-b4de-e2bc8e84f4eb-kube-api-access-krx67\") pod \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\" (UID: \"fce99428-6e11-4229-b4de-e2bc8e84f4eb\") " Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.247676 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-utilities" (OuterVolumeSpecName: "utilities") pod "fce99428-6e11-4229-b4de-e2bc8e84f4eb" (UID: "fce99428-6e11-4229-b4de-e2bc8e84f4eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.256379 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce99428-6e11-4229-b4de-e2bc8e84f4eb-kube-api-access-krx67" (OuterVolumeSpecName: "kube-api-access-krx67") pod "fce99428-6e11-4229-b4de-e2bc8e84f4eb" (UID: "fce99428-6e11-4229-b4de-e2bc8e84f4eb"). InnerVolumeSpecName "kube-api-access-krx67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.316319 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fce99428-6e11-4229-b4de-e2bc8e84f4eb" (UID: "fce99428-6e11-4229-b4de-e2bc8e84f4eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.347868 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.347912 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce99428-6e11-4229-b4de-e2bc8e84f4eb-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.347926 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krx67\" (UniqueName: \"kubernetes.io/projected/fce99428-6e11-4229-b4de-e2bc8e84f4eb-kube-api-access-krx67\") on node \"crc\" DevicePath \"\"" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.683003 4904 generic.go:334] "Generic (PLEG): container finished" podID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerID="8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a" exitCode=0 Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.683066 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkvc8" event={"ID":"fce99428-6e11-4229-b4de-e2bc8e84f4eb","Type":"ContainerDied","Data":"8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a"} Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.684939 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkvc8" event={"ID":"fce99428-6e11-4229-b4de-e2bc8e84f4eb","Type":"ContainerDied","Data":"8bce5dd13b091b1415add150e96abf351b44009b4af8668c494d25fb5ca590ec"} Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.683146 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkvc8" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.685007 4904 scope.go:117] "RemoveContainer" containerID="8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.726671 4904 scope.go:117] "RemoveContainer" containerID="bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.731891 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lkvc8"] Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.749129 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lkvc8"] Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.752541 4904 scope.go:117] "RemoveContainer" containerID="1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.807820 4904 scope.go:117] "RemoveContainer" containerID="8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a" Feb 23 11:18:29 crc kubenswrapper[4904]: E0223 11:18:29.808390 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a\": container with ID starting with 8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a not found: ID does not exist" containerID="8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.808448 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a"} err="failed to get container status \"8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a\": rpc error: code = NotFound desc = could not find container \"8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a\": container with ID starting with 8905a6839c4ef25249550fb432bc13fb31c7060db6ef706663fba397b670407a not found: ID does not exist" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.808472 4904 scope.go:117] "RemoveContainer" containerID="bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d" Feb 23 11:18:29 crc kubenswrapper[4904]: E0223 11:18:29.808866 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d\": container with ID starting with bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d not found: ID does not exist" containerID="bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.808913 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d"} err="failed to get container status \"bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d\": rpc error: code = NotFound desc = could not find container \"bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d\": container with ID starting with bad173555a7399f2f91778ed1624e1b5f1472c6e6037c96fbd8cd1c44de9c89d not found: ID does not exist" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.808930 4904 scope.go:117] "RemoveContainer" 
containerID="1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2" Feb 23 11:18:29 crc kubenswrapper[4904]: E0223 11:18:29.809169 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2\": container with ID starting with 1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2 not found: ID does not exist" containerID="1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2" Feb 23 11:18:29 crc kubenswrapper[4904]: I0223 11:18:29.809195 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2"} err="failed to get container status \"1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2\": rpc error: code = NotFound desc = could not find container \"1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2\": container with ID starting with 1e263cb08692bbf142eb51bbe872e8af43b788f4218c010f1308740f7ec3f7a2 not found: ID does not exist" Feb 23 11:18:31 crc kubenswrapper[4904]: I0223 11:18:31.274814 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" path="/var/lib/kubelet/pods/fce99428-6e11-4229-b4de-e2bc8e84f4eb/volumes" Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.398302 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.398992 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.399045 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.399919 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.399978 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" gracePeriod=600 Feb 23 11:18:47 crc kubenswrapper[4904]: E0223 11:18:47.528840 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.911467 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" exitCode=0 Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.911546 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063"} Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.911614 4904 scope.go:117] "RemoveContainer" containerID="98694295b4fea5a54f7ffa6e8e5ab7cbbafee04fc0e285681141bc61a4996c46" Feb 23 11:18:47 crc kubenswrapper[4904]: I0223 11:18:47.912131 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:18:47 crc kubenswrapper[4904]: E0223 11:18:47.912424 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:18:59 crc kubenswrapper[4904]: I0223 11:18:59.255337 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:18:59 crc kubenswrapper[4904]: E0223 11:18:59.256536 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:19:12 crc kubenswrapper[4904]: I0223 11:19:12.255626 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:19:12 crc kubenswrapper[4904]: E0223 11:19:12.258493 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.088739 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t4v62"] Feb 23 11:19:14 crc kubenswrapper[4904]: E0223 11:19:14.089705 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerName="registry-server" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.089750 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerName="registry-server" Feb 23 11:19:14 crc kubenswrapper[4904]: E0223 11:19:14.089887 4904 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerName="extract-utilities" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.089945 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerName="extract-utilities" Feb 23 11:19:14 crc kubenswrapper[4904]: E0223 11:19:14.089966 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerName="extract-content" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.089974 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerName="extract-content" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.090468 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce99428-6e11-4229-b4de-e2bc8e84f4eb" containerName="registry-server" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.092064 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.111134 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4v62"] Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.124569 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-catalog-content\") pod \"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.124629 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vk9x\" (UniqueName: \"kubernetes.io/projected/31c84a6a-92b9-463c-a4d2-d06a5f81af07-kube-api-access-6vk9x\") pod \"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.124776 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-utilities\") pod \"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.226571 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-utilities\") pod \"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.227023 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-catalog-content\") pod \"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.227164 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vk9x\" (UniqueName: \"kubernetes.io/projected/31c84a6a-92b9-463c-a4d2-d06a5f81af07-kube-api-access-6vk9x\") pod 
\"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.227191 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-utilities\") pod \"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.227563 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-catalog-content\") pod \"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.270929 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vk9x\" (UniqueName: \"kubernetes.io/projected/31c84a6a-92b9-463c-a4d2-d06a5f81af07-kube-api-access-6vk9x\") pod \"redhat-operators-t4v62\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.412662 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:14 crc kubenswrapper[4904]: I0223 11:19:14.946694 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4v62"] Feb 23 11:19:14 crc kubenswrapper[4904]: W0223 11:19:14.962077 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31c84a6a_92b9_463c_a4d2_d06a5f81af07.slice/crio-aa52f30e46ebfbbe57583b42cbaf100d8bae569c198a5b7fc785bdef3541e720 WatchSource:0}: Error finding container aa52f30e46ebfbbe57583b42cbaf100d8bae569c198a5b7fc785bdef3541e720: Status 404 returned error can't find the container with id aa52f30e46ebfbbe57583b42cbaf100d8bae569c198a5b7fc785bdef3541e720 Feb 23 11:19:15 crc kubenswrapper[4904]: I0223 11:19:15.191789 4904 generic.go:334] "Generic (PLEG): container finished" podID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerID="f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6" exitCode=0 Feb 23 11:19:15 crc kubenswrapper[4904]: I0223 11:19:15.191838 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4v62" event={"ID":"31c84a6a-92b9-463c-a4d2-d06a5f81af07","Type":"ContainerDied","Data":"f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6"} Feb 23 11:19:15 crc kubenswrapper[4904]: I0223 11:19:15.191866 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4v62" event={"ID":"31c84a6a-92b9-463c-a4d2-d06a5f81af07","Type":"ContainerStarted","Data":"aa52f30e46ebfbbe57583b42cbaf100d8bae569c198a5b7fc785bdef3541e720"} Feb 23 11:19:16 crc kubenswrapper[4904]: I0223 11:19:16.205923 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4v62" event={"ID":"31c84a6a-92b9-463c-a4d2-d06a5f81af07","Type":"ContainerStarted","Data":"ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536"} Feb 23 11:19:20 crc kubenswrapper[4904]: I0223 11:19:20.271933 4904 generic.go:334] "Generic (PLEG): container finished" 
podID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerID="ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536" exitCode=0 Feb 23 11:19:20 crc kubenswrapper[4904]: I0223 11:19:20.272074 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4v62" event={"ID":"31c84a6a-92b9-463c-a4d2-d06a5f81af07","Type":"ContainerDied","Data":"ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536"} Feb 23 11:19:21 crc kubenswrapper[4904]: I0223 11:19:21.290378 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4v62" event={"ID":"31c84a6a-92b9-463c-a4d2-d06a5f81af07","Type":"ContainerStarted","Data":"fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe"} Feb 23 11:19:21 crc kubenswrapper[4904]: I0223 11:19:21.318372 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t4v62" podStartSLOduration=1.8257995550000001 podStartE2EDuration="7.31835069s" podCreationTimestamp="2026-02-23 11:19:14 +0000 UTC" firstStartedPulling="2026-02-23 11:19:15.193488693 +0000 UTC m=+4388.613862196" lastFinishedPulling="2026-02-23 11:19:20.686039808 +0000 UTC m=+4394.106413331" observedRunningTime="2026-02-23 11:19:21.307359507 +0000 UTC m=+4394.727733030" watchObservedRunningTime="2026-02-23 11:19:21.31835069 +0000 UTC m=+4394.738724223" Feb 23 11:19:23 crc kubenswrapper[4904]: I0223 11:19:23.255588 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:19:23 crc kubenswrapper[4904]: E0223 11:19:23.256174 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:19:24 crc kubenswrapper[4904]: I0223 11:19:24.414139 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:24 crc kubenswrapper[4904]: I0223 11:19:24.415604 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:25 crc kubenswrapper[4904]: I0223 11:19:25.481957 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t4v62" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="registry-server" probeResult="failure" output=< Feb 23 11:19:25 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 11:19:25 crc kubenswrapper[4904]: > Feb 23 11:19:34 crc kubenswrapper[4904]: I0223 11:19:34.256760 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:19:34 crc kubenswrapper[4904]: E0223 11:19:34.258399 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 
11:19:34 crc kubenswrapper[4904]: I0223 11:19:34.488319 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:34 crc kubenswrapper[4904]: I0223 11:19:34.541816 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:34 crc kubenswrapper[4904]: I0223 11:19:34.733303 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4v62"] Feb 23 11:19:36 crc kubenswrapper[4904]: I0223 11:19:36.455314 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t4v62" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="registry-server" containerID="cri-o://fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe" gracePeriod=2 Feb 23 11:19:36 crc kubenswrapper[4904]: E0223 11:19:36.552598 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31c84a6a_92b9_463c_a4d2_d06a5f81af07.slice/crio-conmon-fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31c84a6a_92b9_463c_a4d2_d06a5f81af07.slice/crio-fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe.scope\": RecentStats: unable to find data in memory cache]" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.039120 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.126255 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-catalog-content\") pod \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.126319 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-utilities\") pod \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.127207 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vk9x\" (UniqueName: \"kubernetes.io/projected/31c84a6a-92b9-463c-a4d2-d06a5f81af07-kube-api-access-6vk9x\") pod \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\" (UID: \"31c84a6a-92b9-463c-a4d2-d06a5f81af07\") " Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.127406 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-utilities" (OuterVolumeSpecName: "utilities") pod "31c84a6a-92b9-463c-a4d2-d06a5f81af07" (UID: "31c84a6a-92b9-463c-a4d2-d06a5f81af07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.128057 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.133560 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c84a6a-92b9-463c-a4d2-d06a5f81af07-kube-api-access-6vk9x" (OuterVolumeSpecName: "kube-api-access-6vk9x") pod "31c84a6a-92b9-463c-a4d2-d06a5f81af07" (UID: "31c84a6a-92b9-463c-a4d2-d06a5f81af07"). InnerVolumeSpecName "kube-api-access-6vk9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.229874 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vk9x\" (UniqueName: \"kubernetes.io/projected/31c84a6a-92b9-463c-a4d2-d06a5f81af07-kube-api-access-6vk9x\") on node \"crc\" DevicePath \"\"" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.271481 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31c84a6a-92b9-463c-a4d2-d06a5f81af07" (UID: "31c84a6a-92b9-463c-a4d2-d06a5f81af07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.331200 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c84a6a-92b9-463c-a4d2-d06a5f81af07-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.479158 4904 generic.go:334] "Generic (PLEG): container finished" podID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerID="fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe" exitCode=0 Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.479264 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4v62" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.479323 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4v62" event={"ID":"31c84a6a-92b9-463c-a4d2-d06a5f81af07","Type":"ContainerDied","Data":"fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe"} Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.480761 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4v62" event={"ID":"31c84a6a-92b9-463c-a4d2-d06a5f81af07","Type":"ContainerDied","Data":"aa52f30e46ebfbbe57583b42cbaf100d8bae569c198a5b7fc785bdef3541e720"} Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.480804 4904 scope.go:117] "RemoveContainer" containerID="fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.511895 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4v62"] Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.521197 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t4v62"] Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.528184 4904 scope.go:117] "RemoveContainer" containerID="ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.562368 4904 scope.go:117] "RemoveContainer" containerID="f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.613014 4904 scope.go:117] "RemoveContainer" containerID="fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe" Feb 23 11:19:37 crc kubenswrapper[4904]: E0223 11:19:37.613377 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe\": container with ID starting with fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe not found: ID does not exist" containerID="fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.613413 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe"} err="failed to get container status \"fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe\": rpc error: code = NotFound desc = could not find container \"fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe\": container with ID starting with fcdb84239b6bb971ea3b441fe69f323535bbbc4c30fa08d5600c553b46a887fe not found: ID does not exist" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.613438 4904 scope.go:117] "RemoveContainer" containerID="ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536" Feb 23 11:19:37 crc kubenswrapper[4904]: E0223 11:19:37.613666 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536\": container with ID starting with ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536 not found: ID does not exist" containerID="ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.613698 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536"} err="failed to get container status \"ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536\": rpc error: code = NotFound desc = could not find container \"ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536\": container with ID starting with ec5481cd4a9601d650d197cf85211bab39099bda7d64739db4249da6f046a536 not found: ID does not exist" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.613749 4904 scope.go:117] "RemoveContainer" containerID="f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6" Feb 23 11:19:37 crc kubenswrapper[4904]: E0223 11:19:37.613955 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6\": container with ID starting with f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6 not found: ID does not exist" containerID="f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6" Feb 23 11:19:37 crc kubenswrapper[4904]: I0223 11:19:37.613985 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6"} err="failed to get container status \"f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6\": rpc error: code = NotFound desc = could not find container \"f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6\": container with ID starting with f75aac3b0b664853d5e5af74fc03ae569d91546d1049a2f7e731bc885e3c70d6 not found: ID does not exist" Feb 23 11:19:39 crc kubenswrapper[4904]: I0223 11:19:39.276049 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" path="/var/lib/kubelet/pods/31c84a6a-92b9-463c-a4d2-d06a5f81af07/volumes" Feb 23 11:19:46 crc kubenswrapper[4904]: I0223 11:19:46.719601 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="404e5fa5-dbcb-4e7e-ad52-96f65cb16015" containerName="galera" probeResult="failure" output="command timed out" Feb 23 11:19:47 crc kubenswrapper[4904]: I0223 11:19:47.265405 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:19:47 crc kubenswrapper[4904]: E0223 11:19:47.266069 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:20:00 crc kubenswrapper[4904]: I0223 11:20:00.256368 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:20:00 crc kubenswrapper[4904]: E0223 11:20:00.258294 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:20:11 crc kubenswrapper[4904]: I0223 11:20:11.256412 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:20:11 crc kubenswrapper[4904]: E0223 11:20:11.257213 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:20:24 crc kubenswrapper[4904]: I0223 11:20:24.256408 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:20:24 crc kubenswrapper[4904]: E0223 11:20:24.258750 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:20:39 crc kubenswrapper[4904]: I0223 11:20:39.256382 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:20:39 crc kubenswrapper[4904]: E0223 11:20:39.257655 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:20:50 crc kubenswrapper[4904]: I0223 11:20:50.255298 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:20:50 crc kubenswrapper[4904]: E0223 11:20:50.255979 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:21:01 crc kubenswrapper[4904]: I0223 11:21:01.255692 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:21:01 crc kubenswrapper[4904]: E0223 11:21:01.256849 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.452395 4904 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bb5vx"] Feb 23 11:21:05 crc kubenswrapper[4904]: E0223 11:21:05.453184 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="extract-utilities" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.453197 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="extract-utilities" Feb 23 11:21:05 crc kubenswrapper[4904]: E0223 11:21:05.453231 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="extract-content" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.453237 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="extract-content" Feb 23 11:21:05 crc kubenswrapper[4904]: E0223 11:21:05.453252 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="registry-server" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.453259 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="registry-server" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.453435 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c84a6a-92b9-463c-a4d2-d06a5f81af07" containerName="registry-server" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.454804 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.466106 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb5vx"] Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.573525 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-utilities\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.573618 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlw56\" (UniqueName: \"kubernetes.io/projected/8285c0c1-d975-43ba-b54f-88db2e30ad61-kube-api-access-wlw56\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.573680 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-catalog-content\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.675610 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-utilities\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.675681 
4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlw56\" (UniqueName: \"kubernetes.io/projected/8285c0c1-d975-43ba-b54f-88db2e30ad61-kube-api-access-wlw56\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.675761 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-catalog-content\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.676273 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-utilities\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.676316 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-catalog-content\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.700165 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlw56\" (UniqueName: \"kubernetes.io/projected/8285c0c1-d975-43ba-b54f-88db2e30ad61-kube-api-access-wlw56\") pod \"redhat-marketplace-bb5vx\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:05 crc kubenswrapper[4904]: I0223 11:21:05.776212 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:06 crc kubenswrapper[4904]: I0223 11:21:06.318642 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb5vx"] Feb 23 11:21:06 crc kubenswrapper[4904]: I0223 11:21:06.456151 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb5vx" event={"ID":"8285c0c1-d975-43ba-b54f-88db2e30ad61","Type":"ContainerStarted","Data":"51fad180c342af3fb258b63e20f6636cb84c958958d25f4ebbd72f20d5721e6b"} Feb 23 11:21:07 crc kubenswrapper[4904]: I0223 11:21:07.465106 4904 generic.go:334] "Generic (PLEG): container finished" podID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerID="521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6" exitCode=0 Feb 23 11:21:07 crc kubenswrapper[4904]: I0223 11:21:07.465158 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb5vx" event={"ID":"8285c0c1-d975-43ba-b54f-88db2e30ad61","Type":"ContainerDied","Data":"521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6"} Feb 23 11:21:08 crc kubenswrapper[4904]: I0223 11:21:08.475767 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb5vx" event={"ID":"8285c0c1-d975-43ba-b54f-88db2e30ad61","Type":"ContainerStarted","Data":"a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9"} Feb 23 11:21:08 crc kubenswrapper[4904]: W0223 11:21:08.896681 4904 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8285c0c1_d975_43ba_b54f_88db2e30ad61.slice/crio-conmon-a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9.scope/memory.swap.max": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8285c0c1_d975_43ba_b54f_88db2e30ad61.slice/crio-conmon-a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9.scope/memory.swap.max: no such device Feb 23 11:21:09 crc kubenswrapper[4904]: E0223 11:21:09.035246 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8285c0c1_d975_43ba_b54f_88db2e30ad61.slice/crio-a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9.scope\": RecentStats: unable to find data in memory cache]" Feb 23 11:21:09 crc kubenswrapper[4904]: I0223 11:21:09.486308 4904 generic.go:334] "Generic (PLEG): container finished" podID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerID="a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9" exitCode=0 Feb 23 11:21:09 crc kubenswrapper[4904]: I0223 11:21:09.486380 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb5vx" event={"ID":"8285c0c1-d975-43ba-b54f-88db2e30ad61","Type":"ContainerDied","Data":"a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9"} Feb 23 11:21:10 crc kubenswrapper[4904]: I0223 11:21:10.499551 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb5vx" event={"ID":"8285c0c1-d975-43ba-b54f-88db2e30ad61","Type":"ContainerStarted","Data":"b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1"} Feb 23 11:21:10 crc kubenswrapper[4904]: I0223 11:21:10.537798 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bb5vx" 
podStartSLOduration=3.141273153 podStartE2EDuration="5.53777815s" podCreationTimestamp="2026-02-23 11:21:05 +0000 UTC" firstStartedPulling="2026-02-23 11:21:07.466860894 +0000 UTC m=+4500.887234407" lastFinishedPulling="2026-02-23 11:21:09.863365891 +0000 UTC m=+4503.283739404" observedRunningTime="2026-02-23 11:21:10.526468228 +0000 UTC m=+4503.946841741" watchObservedRunningTime="2026-02-23 11:21:10.53777815 +0000 UTC m=+4503.958151663" Feb 23 11:21:12 crc kubenswrapper[4904]: I0223 11:21:12.256065 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:21:12 crc kubenswrapper[4904]: E0223 11:21:12.256796 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:21:15 crc kubenswrapper[4904]: I0223 11:21:15.776599 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:15 crc kubenswrapper[4904]: I0223 11:21:15.777322 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:15 crc kubenswrapper[4904]: I0223 11:21:15.854365 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:16 crc kubenswrapper[4904]: I0223 11:21:16.636631 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:16 crc kubenswrapper[4904]: I0223 11:21:16.689605 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb5vx"] Feb 23 11:21:18 crc kubenswrapper[4904]: I0223 11:21:18.594552 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bb5vx" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerName="registry-server" containerID="cri-o://b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1" gracePeriod=2 Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.073698 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.164053 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlw56\" (UniqueName: \"kubernetes.io/projected/8285c0c1-d975-43ba-b54f-88db2e30ad61-kube-api-access-wlw56\") pod \"8285c0c1-d975-43ba-b54f-88db2e30ad61\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.164176 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-utilities\") pod \"8285c0c1-d975-43ba-b54f-88db2e30ad61\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.164214 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-catalog-content\") pod \"8285c0c1-d975-43ba-b54f-88db2e30ad61\" (UID: \"8285c0c1-d975-43ba-b54f-88db2e30ad61\") " Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.171560 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-utilities" (OuterVolumeSpecName: "utilities") pod "8285c0c1-d975-43ba-b54f-88db2e30ad61" (UID: "8285c0c1-d975-43ba-b54f-88db2e30ad61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.186622 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8285c0c1-d975-43ba-b54f-88db2e30ad61-kube-api-access-wlw56" (OuterVolumeSpecName: "kube-api-access-wlw56") pod "8285c0c1-d975-43ba-b54f-88db2e30ad61" (UID: "8285c0c1-d975-43ba-b54f-88db2e30ad61"). InnerVolumeSpecName "kube-api-access-wlw56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.205682 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8285c0c1-d975-43ba-b54f-88db2e30ad61" (UID: "8285c0c1-d975-43ba-b54f-88db2e30ad61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.271005 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlw56\" (UniqueName: \"kubernetes.io/projected/8285c0c1-d975-43ba-b54f-88db2e30ad61-kube-api-access-wlw56\") on node \"crc\" DevicePath \"\"" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.271247 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.271259 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8285c0c1-d975-43ba-b54f-88db2e30ad61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.609782 4904 generic.go:334] "Generic (PLEG): container finished" podID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerID="b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1" exitCode=0 Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.609849 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb5vx" event={"ID":"8285c0c1-d975-43ba-b54f-88db2e30ad61","Type":"ContainerDied","Data":"b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1"} Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.609929 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb5vx" event={"ID":"8285c0c1-d975-43ba-b54f-88db2e30ad61","Type":"ContainerDied","Data":"51fad180c342af3fb258b63e20f6636cb84c958958d25f4ebbd72f20d5721e6b"} Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.609970 4904 scope.go:117] "RemoveContainer" containerID="b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.611890 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb5vx" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.638262 4904 scope.go:117] "RemoveContainer" containerID="a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.656023 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb5vx"] Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.673048 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb5vx"] Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.675584 4904 scope.go:117] "RemoveContainer" containerID="521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.751905 4904 scope.go:117] "RemoveContainer" containerID="b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1" Feb 23 11:21:19 crc kubenswrapper[4904]: E0223 11:21:19.752507 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1\": container with ID starting with b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1 not found: ID does not exist" containerID="b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.752580 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1"} err="failed to get container status \"b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1\": rpc error: code = NotFound desc = could not find container \"b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1\": container with ID starting with b59a562fb34832ac79d087acdb76be281d52d8513a57fc510f724daf0f9e89d1 not found: ID does not exist" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.752629 4904 scope.go:117] "RemoveContainer" containerID="a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9" Feb 23 11:21:19 crc kubenswrapper[4904]: E0223 11:21:19.753074 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9\": container with ID starting with a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9 not found: ID does not exist" containerID="a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.753116 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9"} err="failed to get container status \"a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9\": rpc error: code = NotFound desc = could not find container \"a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9\": container with ID starting with a4d2654abd3af0c102063920f2988a03113a1e6208c6cc7c7d883ae2e8cb19c9 not found: ID does not exist" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.753145 4904 scope.go:117] "RemoveContainer" containerID="521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6" Feb 23 11:21:19 crc kubenswrapper[4904]: E0223 11:21:19.753519 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6\": container with ID starting with 521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6 not found: ID does not exist" containerID="521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6" Feb 23 11:21:19 crc kubenswrapper[4904]: I0223 11:21:19.753545 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6"} err="failed to get container status \"521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6\": rpc error: code = NotFound desc = could not find container \"521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6\": container with ID starting with 521f779be317a10eebd4283525951bc2be7a39fa09a49d0f564db13efbd146b6 not found: ID does not exist" Feb 23 11:21:21 crc kubenswrapper[4904]: I0223 11:21:21.270140 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" path="/var/lib/kubelet/pods/8285c0c1-d975-43ba-b54f-88db2e30ad61/volumes" Feb 23 11:21:23 crc kubenswrapper[4904]: I0223 11:21:23.255537 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:21:23 crc kubenswrapper[4904]: E0223 11:21:23.256262 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:21:38 crc kubenswrapper[4904]: I0223 11:21:38.256016 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:21:38 crc kubenswrapper[4904]: E0223 11:21:38.256689 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:21:49 crc kubenswrapper[4904]: I0223 11:21:49.256800 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:21:49 crc kubenswrapper[4904]: E0223 11:21:49.257969 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:22:02 crc kubenswrapper[4904]: I0223 11:22:02.583599 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:22:02 crc kubenswrapper[4904]: E0223 11:22:02.594342 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:22:13 crc kubenswrapper[4904]: I0223 11:22:13.256156 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:22:13 crc kubenswrapper[4904]: E0223 11:22:13.256957 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:22:28 crc kubenswrapper[4904]: I0223 11:22:28.256321 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:22:28 crc kubenswrapper[4904]: E0223 11:22:28.257416 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:22:43 crc kubenswrapper[4904]: I0223 11:22:43.256068 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:22:43 crc kubenswrapper[4904]: E0223 11:22:43.256949 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:22:56 crc kubenswrapper[4904]: I0223 11:22:56.257010 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:22:56 crc kubenswrapper[4904]: E0223 11:22:56.258397 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:23:11 crc kubenswrapper[4904]: I0223 11:23:11.255234 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:23:11 crc kubenswrapper[4904]: E0223 11:23:11.256007 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:23:23 crc kubenswrapper[4904]: I0223 11:23:23.255815 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:23:23 crc kubenswrapper[4904]: E0223 11:23:23.256587 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:23:35 crc kubenswrapper[4904]: I0223 11:23:35.256107 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:23:35 crc kubenswrapper[4904]: E0223 11:23:35.257003 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:23:46 crc kubenswrapper[4904]: I0223 11:23:46.256510 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:23:46 crc kubenswrapper[4904]: E0223 11:23:46.257192 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:23:57 crc kubenswrapper[4904]: I0223 11:23:57.263497 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:23:58 crc kubenswrapper[4904]: I0223 11:23:58.074207 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"2cd8fbdbd2b09a00e00db2bbd257852b69be64993b2249f7cfb8bc7eb06ffc3e"} Feb 23 11:25:29 crc kubenswrapper[4904]: I0223 11:25:29.125264 4904 generic.go:334] "Generic (PLEG): container finished" podID="463cdfc8-f595-4253-8fcf-4da5d843fcf8" containerID="1acde66d21a46184c17918bca62648acb0a148bc9c736d993fe51828779742f2" exitCode=1 Feb 23 11:25:29 crc kubenswrapper[4904]: I0223 11:25:29.125530 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"463cdfc8-f595-4253-8fcf-4da5d843fcf8","Type":"ContainerDied","Data":"1acde66d21a46184c17918bca62648acb0a148bc9c736d993fe51828779742f2"} Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.641046 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.803571 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config-secret\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.804160 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-temporary\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.804844 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.805264 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x4kc\" (UniqueName: \"kubernetes.io/projected/463cdfc8-f595-4253-8fcf-4da5d843fcf8-kube-api-access-9x4kc\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.805399 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ca-certs\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.805491 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.805555 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.805655 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-config-data\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.805772 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ssh-key\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.805896 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-workdir\") pod \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\" (UID: \"463cdfc8-f595-4253-8fcf-4da5d843fcf8\") " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.806547 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-config-data" (OuterVolumeSpecName: "config-data") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.806990 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.807087 4904 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.828057 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463cdfc8-f595-4253-8fcf-4da5d843fcf8-kube-api-access-9x4kc" (OuterVolumeSpecName: "kube-api-access-9x4kc") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "kube-api-access-9x4kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.832110 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.834999 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.835874 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.856912 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.877753 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.909276 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "463cdfc8-f595-4253-8fcf-4da5d843fcf8" (UID: "463cdfc8-f595-4253-8fcf-4da5d843fcf8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.909405 4904 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.909451 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.909466 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.909478 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.909492 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/463cdfc8-f595-4253-8fcf-4da5d843fcf8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.909507 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x4kc\" (UniqueName: \"kubernetes.io/projected/463cdfc8-f595-4253-8fcf-4da5d843fcf8-kube-api-access-9x4kc\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:30 crc kubenswrapper[4904]: I0223 11:25:30.934037 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 23 11:25:31 crc kubenswrapper[4904]: I0223 11:25:31.011505 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:31 crc kubenswrapper[4904]: I0223 11:25:31.011540 4904 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/463cdfc8-f595-4253-8fcf-4da5d843fcf8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 23 11:25:31 crc kubenswrapper[4904]: I0223 11:25:31.154171 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"463cdfc8-f595-4253-8fcf-4da5d843fcf8","Type":"ContainerDied","Data":"3c6fdb7861c9ec2faae9fac7f244dcec3e6ad5df705b215c4307c716e60cb914"} Feb 23 11:25:31 crc kubenswrapper[4904]: I0223 11:25:31.154214 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c6fdb7861c9ec2faae9fac7f244dcec3e6ad5df705b215c4307c716e60cb914" Feb 23 11:25:31 crc kubenswrapper[4904]: I0223 11:25:31.154290 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.981110 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 11:25:33 crc kubenswrapper[4904]: E0223 11:25:33.982142 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerName="extract-content" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.982158 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerName="extract-content" Feb 23 11:25:33 crc kubenswrapper[4904]: E0223 11:25:33.982173 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463cdfc8-f595-4253-8fcf-4da5d843fcf8" containerName="tempest-tests-tempest-tests-runner" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.982179 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="463cdfc8-f595-4253-8fcf-4da5d843fcf8" containerName="tempest-tests-tempest-tests-runner" Feb 23 11:25:33 crc kubenswrapper[4904]: E0223 11:25:33.982210 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerName="extract-utilities" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.982221 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerName="extract-utilities" Feb 23 11:25:33 crc kubenswrapper[4904]: E0223 11:25:33.982250 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerName="registry-server" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.982256 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerName="registry-server" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.982441 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="8285c0c1-d975-43ba-b54f-88db2e30ad61" containerName="registry-server" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.982457 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="463cdfc8-f595-4253-8fcf-4da5d843fcf8" containerName="tempest-tests-tempest-tests-runner" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.983244 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:33 crc kubenswrapper[4904]: I0223 11:25:33.986081 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-v878l" Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.006354 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.078525 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lmc4\" (UniqueName: \"kubernetes.io/projected/2afac476-a697-44bb-8c89-8f727c74b150-kube-api-access-4lmc4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2afac476-a697-44bb-8c89-8f727c74b150\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.078633 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2afac476-a697-44bb-8c89-8f727c74b150\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.180544 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lmc4\" (UniqueName: \"kubernetes.io/projected/2afac476-a697-44bb-8c89-8f727c74b150-kube-api-access-4lmc4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2afac476-a697-44bb-8c89-8f727c74b150\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.180653 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2afac476-a697-44bb-8c89-8f727c74b150\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.181141 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2afac476-a697-44bb-8c89-8f727c74b150\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.206451 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lmc4\" (UniqueName: \"kubernetes.io/projected/2afac476-a697-44bb-8c89-8f727c74b150-kube-api-access-4lmc4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2afac476-a697-44bb-8c89-8f727c74b150\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.244892 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2afac476-a697-44bb-8c89-8f727c74b150\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:34 crc 
kubenswrapper[4904]: I0223 11:25:34.322129 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.821689 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 11:25:34 crc kubenswrapper[4904]: I0223 11:25:34.826845 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 11:25:35 crc kubenswrapper[4904]: I0223 11:25:35.199298 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2afac476-a697-44bb-8c89-8f727c74b150","Type":"ContainerStarted","Data":"506207df647c9b6767acf2f1e2736679212be96dcca43f4f39d770467d452eaf"} Feb 23 11:25:36 crc kubenswrapper[4904]: I0223 11:25:36.216009 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2afac476-a697-44bb-8c89-8f727c74b150","Type":"ContainerStarted","Data":"fff9bc4b07ab84346636f5db82404d1634856a9a925cc6a3b48220650a219da4"} Feb 23 11:25:36 crc kubenswrapper[4904]: I0223 11:25:36.248597 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.333774052 podStartE2EDuration="3.24857448s" podCreationTimestamp="2026-02-23 11:25:33 +0000 UTC" firstStartedPulling="2026-02-23 11:25:34.826432666 +0000 UTC m=+4768.246806219" lastFinishedPulling="2026-02-23 11:25:35.741233134 +0000 UTC m=+4769.161606647" observedRunningTime="2026-02-23 11:25:36.235806617 +0000 UTC m=+4769.656180160" watchObservedRunningTime="2026-02-23 11:25:36.24857448 +0000 UTC m=+4769.668948003" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.280960 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-thlbh/must-gather-mh759"] Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.283135 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/must-gather-mh759" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.288260 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-thlbh"/"kube-root-ca.crt" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.288469 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-thlbh"/"default-dockercfg-n2n5z" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.289177 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-thlbh"/"openshift-service-ca.crt" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.298815 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-thlbh/must-gather-mh759"] Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.326832 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15025a99-7d04-43e3-8319-51f4a350663f-must-gather-output\") pod \"must-gather-mh759\" (UID: \"15025a99-7d04-43e3-8319-51f4a350663f\") " pod="openshift-must-gather-thlbh/must-gather-mh759" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.326874 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2nw5\" (UniqueName: \"kubernetes.io/projected/15025a99-7d04-43e3-8319-51f4a350663f-kube-api-access-m2nw5\") pod \"must-gather-mh759\" (UID: \"15025a99-7d04-43e3-8319-51f4a350663f\") " pod="openshift-must-gather-thlbh/must-gather-mh759" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.428974 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15025a99-7d04-43e3-8319-51f4a350663f-must-gather-output\") pod \"must-gather-mh759\" (UID: \"15025a99-7d04-43e3-8319-51f4a350663f\") " pod="openshift-must-gather-thlbh/must-gather-mh759" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.429014 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2nw5\" (UniqueName: \"kubernetes.io/projected/15025a99-7d04-43e3-8319-51f4a350663f-kube-api-access-m2nw5\") pod \"must-gather-mh759\" (UID: \"15025a99-7d04-43e3-8319-51f4a350663f\") " pod="openshift-must-gather-thlbh/must-gather-mh759" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.429788 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15025a99-7d04-43e3-8319-51f4a350663f-must-gather-output\") pod \"must-gather-mh759\" (UID: \"15025a99-7d04-43e3-8319-51f4a350663f\") " pod="openshift-must-gather-thlbh/must-gather-mh759" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.459289 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2nw5\" (UniqueName: \"kubernetes.io/projected/15025a99-7d04-43e3-8319-51f4a350663f-kube-api-access-m2nw5\") pod \"must-gather-mh759\" (UID: \"15025a99-7d04-43e3-8319-51f4a350663f\") " pod="openshift-must-gather-thlbh/must-gather-mh759" Feb 23 11:26:12 crc kubenswrapper[4904]: I0223 11:26:12.601410 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/must-gather-mh759" Feb 23 11:26:13 crc kubenswrapper[4904]: I0223 11:26:13.150670 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-thlbh/must-gather-mh759"] Feb 23 11:26:13 crc kubenswrapper[4904]: I0223 11:26:13.630826 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/must-gather-mh759" event={"ID":"15025a99-7d04-43e3-8319-51f4a350663f","Type":"ContainerStarted","Data":"4c49d73b03d764b55f72977838e8feca1591c5cb26e068d47d47cf9360d3c425"} Feb 23 11:26:17 crc kubenswrapper[4904]: I0223 11:26:17.398000 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:26:17 crc kubenswrapper[4904]: I0223 11:26:17.398639 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:26:19 crc kubenswrapper[4904]: I0223 11:26:19.694401 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/must-gather-mh759" event={"ID":"15025a99-7d04-43e3-8319-51f4a350663f","Type":"ContainerStarted","Data":"78819521ef7286da36659bf94bd9c82c8aef2d688350102c6883bdedfb6898e1"} Feb 23 11:26:20 crc kubenswrapper[4904]: I0223 11:26:20.709971 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/must-gather-mh759" event={"ID":"15025a99-7d04-43e3-8319-51f4a350663f","Type":"ContainerStarted","Data":"ea69482b9f5e0b05b2545b5416e47b010e8091bb8024c1449001fca79daa79cb"} Feb 23 11:26:20 crc kubenswrapper[4904]: I0223 11:26:20.739279 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-thlbh/must-gather-mh759" podStartSLOduration=2.6372181709999998 podStartE2EDuration="8.739252508s" podCreationTimestamp="2026-02-23 11:26:12 +0000 UTC" firstStartedPulling="2026-02-23 11:26:13.155516502 +0000 UTC m=+4806.575890015" lastFinishedPulling="2026-02-23 11:26:19.257550819 +0000 UTC m=+4812.677924352" observedRunningTime="2026-02-23 11:26:20.727082512 +0000 UTC m=+4814.147456065" watchObservedRunningTime="2026-02-23 11:26:20.739252508 +0000 UTC m=+4814.159626031" Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.023322 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-thlbh/crc-debug-495jf"] Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.025021 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.080279 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwmj7\" (UniqueName: \"kubernetes.io/projected/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-kube-api-access-cwmj7\") pod \"crc-debug-495jf\" (UID: \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\") " pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.080436 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-host\") pod \"crc-debug-495jf\" (UID: \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\") " pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.182859 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwmj7\" (UniqueName: \"kubernetes.io/projected/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-kube-api-access-cwmj7\") pod \"crc-debug-495jf\" (UID: \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\") " pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.182965 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-host\") pod \"crc-debug-495jf\" (UID: \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\") " pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.183155 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-host\") pod \"crc-debug-495jf\" (UID: \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\") " pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.202221 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwmj7\" (UniqueName: \"kubernetes.io/projected/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-kube-api-access-cwmj7\") pod \"crc-debug-495jf\" (UID: \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\") " pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.342359 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:26:23 crc kubenswrapper[4904]: W0223 11:26:23.392958 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacc27be0_dc0f_48e8_9e63_3ad228aa1d99.slice/crio-464176717cab328fc55a8beaee08f948fa0bb46a5397993d631ffe57546dd2f9 WatchSource:0}: Error finding container 464176717cab328fc55a8beaee08f948fa0bb46a5397993d631ffe57546dd2f9: Status 404 returned error can't find the container with id 464176717cab328fc55a8beaee08f948fa0bb46a5397993d631ffe57546dd2f9 Feb 23 11:26:23 crc kubenswrapper[4904]: I0223 11:26:23.738452 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/crc-debug-495jf" event={"ID":"acc27be0-dc0f-48e8-9e63-3ad228aa1d99","Type":"ContainerStarted","Data":"464176717cab328fc55a8beaee08f948fa0bb46a5397993d631ffe57546dd2f9"} Feb 23 11:26:34 crc kubenswrapper[4904]: I0223 11:26:34.845786 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/crc-debug-495jf" event={"ID":"acc27be0-dc0f-48e8-9e63-3ad228aa1d99","Type":"ContainerStarted","Data":"6f07abf11f78a85a0b5f70992959bebfc369e03abd5bc799fdb704d12e9dacfe"} Feb 23 11:26:34 crc kubenswrapper[4904]: I0223 11:26:34.864837 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-thlbh/crc-debug-495jf" podStartSLOduration=1.461463324 podStartE2EDuration="11.864821686s" podCreationTimestamp="2026-02-23 11:26:23 +0000 UTC" firstStartedPulling="2026-02-23 11:26:23.395413452 +0000 UTC m=+4816.815786965" lastFinishedPulling="2026-02-23 11:26:33.798771814 +0000 UTC m=+4827.219145327" observedRunningTime="2026-02-23 11:26:34.858905418 +0000 UTC m=+4828.279278931" watchObservedRunningTime="2026-02-23 11:26:34.864821686 +0000 UTC m=+4828.285195199" Feb 23 11:26:47 crc kubenswrapper[4904]: I0223 11:26:47.397773 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:26:47 crc kubenswrapper[4904]: I0223 11:26:47.398309 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:27:17 crc kubenswrapper[4904]: I0223 11:27:17.398372 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:27:17 crc kubenswrapper[4904]: I0223 11:27:17.399170 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:27:17 crc kubenswrapper[4904]: I0223 11:27:17.399251 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 11:27:17 crc kubenswrapper[4904]: I0223 11:27:17.400371 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cd8fbdbd2b09a00e00db2bbd257852b69be64993b2249f7cfb8bc7eb06ffc3e"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 11:27:17 crc kubenswrapper[4904]: I0223 11:27:17.400475 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://2cd8fbdbd2b09a00e00db2bbd257852b69be64993b2249f7cfb8bc7eb06ffc3e" gracePeriod=600 Feb 23 11:27:18 crc kubenswrapper[4904]: I0223 11:27:18.306847 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="2cd8fbdbd2b09a00e00db2bbd257852b69be64993b2249f7cfb8bc7eb06ffc3e" exitCode=0 Feb 23 11:27:18 crc kubenswrapper[4904]: I0223 11:27:18.306915 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"2cd8fbdbd2b09a00e00db2bbd257852b69be64993b2249f7cfb8bc7eb06ffc3e"} Feb 23 11:27:18 crc kubenswrapper[4904]: I0223 11:27:18.307396 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"} Feb 23 11:27:18 crc kubenswrapper[4904]: I0223 11:27:18.307419 4904 scope.go:117] "RemoveContainer" containerID="5640fcadb91fb53744b622bbbf16aaaf0a77a0be0aee370c3d93dc03773f5063" Feb 23 11:27:20 crc kubenswrapper[4904]: I0223 11:27:20.330270 4904 generic.go:334] "Generic (PLEG): container finished" podID="acc27be0-dc0f-48e8-9e63-3ad228aa1d99" containerID="6f07abf11f78a85a0b5f70992959bebfc369e03abd5bc799fdb704d12e9dacfe" exitCode=0 Feb 23 11:27:20 crc kubenswrapper[4904]: I0223 11:27:20.330535 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/crc-debug-495jf" event={"ID":"acc27be0-dc0f-48e8-9e63-3ad228aa1d99","Type":"ContainerDied","Data":"6f07abf11f78a85a0b5f70992959bebfc369e03abd5bc799fdb704d12e9dacfe"} Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.457503 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.498219 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-thlbh/crc-debug-495jf"] Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.511883 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-thlbh/crc-debug-495jf"] Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.582507 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwmj7\" (UniqueName: \"kubernetes.io/projected/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-kube-api-access-cwmj7\") pod \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\" (UID: \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\") " Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.583137 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-host\") pod \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\" (UID: \"acc27be0-dc0f-48e8-9e63-3ad228aa1d99\") " Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.583249 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-host" (OuterVolumeSpecName: "host") pod "acc27be0-dc0f-48e8-9e63-3ad228aa1d99" (UID: "acc27be0-dc0f-48e8-9e63-3ad228aa1d99"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.583903 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-host\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.587927 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-kube-api-access-cwmj7" (OuterVolumeSpecName: "kube-api-access-cwmj7") pod "acc27be0-dc0f-48e8-9e63-3ad228aa1d99" (UID: "acc27be0-dc0f-48e8-9e63-3ad228aa1d99"). InnerVolumeSpecName "kube-api-access-cwmj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:27:21 crc kubenswrapper[4904]: I0223 11:27:21.685206 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwmj7\" (UniqueName: \"kubernetes.io/projected/acc27be0-dc0f-48e8-9e63-3ad228aa1d99-kube-api-access-cwmj7\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.351811 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464176717cab328fc55a8beaee08f948fa0bb46a5397993d631ffe57546dd2f9" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.351918 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-495jf" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.704671 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-thlbh/crc-debug-9r998"] Feb 23 11:27:22 crc kubenswrapper[4904]: E0223 11:27:22.705349 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc27be0-dc0f-48e8-9e63-3ad228aa1d99" containerName="container-00" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.705361 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc27be0-dc0f-48e8-9e63-3ad228aa1d99" containerName="container-00" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.705540 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc27be0-dc0f-48e8-9e63-3ad228aa1d99" containerName="container-00" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.706189 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.805565 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18727b68-9916-4348-89d4-e8ebc079898b-host\") pod \"crc-debug-9r998\" (UID: \"18727b68-9916-4348-89d4-e8ebc079898b\") " pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.805670 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qjgf\" (UniqueName: \"kubernetes.io/projected/18727b68-9916-4348-89d4-e8ebc079898b-kube-api-access-6qjgf\") pod \"crc-debug-9r998\" (UID: \"18727b68-9916-4348-89d4-e8ebc079898b\") " pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.906995 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18727b68-9916-4348-89d4-e8ebc079898b-host\") pod \"crc-debug-9r998\" (UID: \"18727b68-9916-4348-89d4-e8ebc079898b\") " pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.907099 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qjgf\" (UniqueName: \"kubernetes.io/projected/18727b68-9916-4348-89d4-e8ebc079898b-kube-api-access-6qjgf\") pod \"crc-debug-9r998\" (UID: \"18727b68-9916-4348-89d4-e8ebc079898b\") " pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.907620 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18727b68-9916-4348-89d4-e8ebc079898b-host\") pod \"crc-debug-9r998\" (UID: \"18727b68-9916-4348-89d4-e8ebc079898b\") " pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:22 crc kubenswrapper[4904]: I0223 11:27:22.927122 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qjgf\" (UniqueName: \"kubernetes.io/projected/18727b68-9916-4348-89d4-e8ebc079898b-kube-api-access-6qjgf\") pod \"crc-debug-9r998\" (UID: \"18727b68-9916-4348-89d4-e8ebc079898b\") " pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:23 crc kubenswrapper[4904]: I0223 11:27:23.023619 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:23 crc kubenswrapper[4904]: I0223 11:27:23.267005 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc27be0-dc0f-48e8-9e63-3ad228aa1d99" path="/var/lib/kubelet/pods/acc27be0-dc0f-48e8-9e63-3ad228aa1d99/volumes" Feb 23 11:27:23 crc kubenswrapper[4904]: I0223 11:27:23.362179 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/crc-debug-9r998" event={"ID":"18727b68-9916-4348-89d4-e8ebc079898b","Type":"ContainerStarted","Data":"b2c78a4b3459834db032f649b479091b9295151bdc6ab53780d83bf1afff19f7"} Feb 23 11:27:23 crc kubenswrapper[4904]: I0223 11:27:23.362225 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/crc-debug-9r998" event={"ID":"18727b68-9916-4348-89d4-e8ebc079898b","Type":"ContainerStarted","Data":"4506dd48a53313e05cd571a5530990cd28937801ed2f29942c2b2811a1b10c69"} Feb 23 11:27:23 crc kubenswrapper[4904]: I0223 11:27:23.383484 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-thlbh/crc-debug-9r998" podStartSLOduration=1.383462991 podStartE2EDuration="1.383462991s" podCreationTimestamp="2026-02-23 11:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 11:27:23.374388733 +0000 UTC m=+4876.794762246" watchObservedRunningTime="2026-02-23 11:27:23.383462991 +0000 UTC m=+4876.803836494" Feb 23 11:27:24 crc kubenswrapper[4904]: I0223 11:27:24.375413 4904 generic.go:334] "Generic (PLEG): container finished" podID="18727b68-9916-4348-89d4-e8ebc079898b" containerID="b2c78a4b3459834db032f649b479091b9295151bdc6ab53780d83bf1afff19f7" exitCode=0 Feb 23 11:27:24 crc kubenswrapper[4904]: I0223 11:27:24.375502 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/crc-debug-9r998" event={"ID":"18727b68-9916-4348-89d4-e8ebc079898b","Type":"ContainerDied","Data":"b2c78a4b3459834db032f649b479091b9295151bdc6ab53780d83bf1afff19f7"} Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.484975 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.556590 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18727b68-9916-4348-89d4-e8ebc079898b-host\") pod \"18727b68-9916-4348-89d4-e8ebc079898b\" (UID: \"18727b68-9916-4348-89d4-e8ebc079898b\") " Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.556846 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qjgf\" (UniqueName: \"kubernetes.io/projected/18727b68-9916-4348-89d4-e8ebc079898b-kube-api-access-6qjgf\") pod \"18727b68-9916-4348-89d4-e8ebc079898b\" (UID: \"18727b68-9916-4348-89d4-e8ebc079898b\") " Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.558504 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18727b68-9916-4348-89d4-e8ebc079898b-host" (OuterVolumeSpecName: "host") pod "18727b68-9916-4348-89d4-e8ebc079898b" (UID: "18727b68-9916-4348-89d4-e8ebc079898b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.577956 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18727b68-9916-4348-89d4-e8ebc079898b-kube-api-access-6qjgf" (OuterVolumeSpecName: "kube-api-access-6qjgf") pod "18727b68-9916-4348-89d4-e8ebc079898b" (UID: "18727b68-9916-4348-89d4-e8ebc079898b"). InnerVolumeSpecName "kube-api-access-6qjgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.659583 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qjgf\" (UniqueName: \"kubernetes.io/projected/18727b68-9916-4348-89d4-e8ebc079898b-kube-api-access-6qjgf\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.659611 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18727b68-9916-4348-89d4-e8ebc079898b-host\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.890273 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-thlbh/crc-debug-9r998"] Feb 23 11:27:25 crc kubenswrapper[4904]: I0223 11:27:25.907633 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-thlbh/crc-debug-9r998"] Feb 23 11:27:26 crc kubenswrapper[4904]: I0223 11:27:26.396307 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4506dd48a53313e05cd571a5530990cd28937801ed2f29942c2b2811a1b10c69" Feb 23 11:27:26 crc kubenswrapper[4904]: I0223 11:27:26.396358 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-9r998" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.103636 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-thlbh/crc-debug-frwl7"] Feb 23 11:27:27 crc kubenswrapper[4904]: E0223 11:27:27.104547 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18727b68-9916-4348-89d4-e8ebc079898b" containerName="container-00" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.104566 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="18727b68-9916-4348-89d4-e8ebc079898b" containerName="container-00" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.104860 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="18727b68-9916-4348-89d4-e8ebc079898b" containerName="container-00" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.107057 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.212236 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee01fe8d-937f-407c-980d-eb830e440294-host\") pod \"crc-debug-frwl7\" (UID: \"ee01fe8d-937f-407c-980d-eb830e440294\") " pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.212688 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xnxg\" (UniqueName: \"kubernetes.io/projected/ee01fe8d-937f-407c-980d-eb830e440294-kube-api-access-7xnxg\") pod \"crc-debug-frwl7\" (UID: \"ee01fe8d-937f-407c-980d-eb830e440294\") " pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.267795 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18727b68-9916-4348-89d4-e8ebc079898b" path="/var/lib/kubelet/pods/18727b68-9916-4348-89d4-e8ebc079898b/volumes" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.314638 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xnxg\" (UniqueName: \"kubernetes.io/projected/ee01fe8d-937f-407c-980d-eb830e440294-kube-api-access-7xnxg\") pod \"crc-debug-frwl7\" (UID: \"ee01fe8d-937f-407c-980d-eb830e440294\") " pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.314853 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee01fe8d-937f-407c-980d-eb830e440294-host\") pod \"crc-debug-frwl7\" (UID: \"ee01fe8d-937f-407c-980d-eb830e440294\") " pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.315067 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee01fe8d-937f-407c-980d-eb830e440294-host\") pod \"crc-debug-frwl7\" (UID: \"ee01fe8d-937f-407c-980d-eb830e440294\") " pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:27 crc kubenswrapper[4904]: I0223 11:27:27.821333 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xnxg\" (UniqueName: \"kubernetes.io/projected/ee01fe8d-937f-407c-980d-eb830e440294-kube-api-access-7xnxg\") pod \"crc-debug-frwl7\" (UID: \"ee01fe8d-937f-407c-980d-eb830e440294\") " pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:28 crc kubenswrapper[4904]: I0223 11:27:28.023947 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:28 crc kubenswrapper[4904]: W0223 11:27:28.069526 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee01fe8d_937f_407c_980d_eb830e440294.slice/crio-b120209bd1a88ac5929999a022c2b5b8b9846170c4c0d41b3753a042b8691f27 WatchSource:0}: Error finding container b120209bd1a88ac5929999a022c2b5b8b9846170c4c0d41b3753a042b8691f27: Status 404 returned error can't find the container with id b120209bd1a88ac5929999a022c2b5b8b9846170c4c0d41b3753a042b8691f27 Feb 23 11:27:28 crc kubenswrapper[4904]: I0223 11:27:28.416524 4904 generic.go:334] "Generic (PLEG): container finished" podID="ee01fe8d-937f-407c-980d-eb830e440294" containerID="70846f69145242a5013de87b1b6e9514e78851bdadd6f4baff7e3b5738fb4eee" exitCode=0 Feb 23 11:27:28 crc kubenswrapper[4904]: I0223 11:27:28.416732 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/crc-debug-frwl7" event={"ID":"ee01fe8d-937f-407c-980d-eb830e440294","Type":"ContainerDied","Data":"70846f69145242a5013de87b1b6e9514e78851bdadd6f4baff7e3b5738fb4eee"} Feb 23 11:27:28 crc kubenswrapper[4904]: I0223 11:27:28.417006 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/crc-debug-frwl7" event={"ID":"ee01fe8d-937f-407c-980d-eb830e440294","Type":"ContainerStarted","Data":"b120209bd1a88ac5929999a022c2b5b8b9846170c4c0d41b3753a042b8691f27"} Feb 23 11:27:28 crc kubenswrapper[4904]: I0223 11:27:28.478759 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-thlbh/crc-debug-frwl7"] Feb 23 11:27:28 crc kubenswrapper[4904]: I0223 11:27:28.498236 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-thlbh/crc-debug-frwl7"] Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.397828 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2n45f"] Feb 23 11:27:29 crc kubenswrapper[4904]: E0223 11:27:29.398572 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee01fe8d-937f-407c-980d-eb830e440294" containerName="container-00" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.398596 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee01fe8d-937f-407c-980d-eb830e440294" containerName="container-00" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.398850 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee01fe8d-937f-407c-980d-eb830e440294" containerName="container-00" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.400680 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.416640 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n45f"] Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.497474 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-catalog-content\") pod \"certified-operators-2n45f\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.497629 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvbq9\" (UniqueName: \"kubernetes.io/projected/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-kube-api-access-bvbq9\") pod \"certified-operators-2n45f\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.497901 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-utilities\") pod \"certified-operators-2n45f\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.528948 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.599371 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-utilities\") pod \"certified-operators-2n45f\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.599452 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-catalog-content\") pod \"certified-operators-2n45f\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.599528 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvbq9\" (UniqueName: \"kubernetes.io/projected/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-kube-api-access-bvbq9\") pod \"certified-operators-2n45f\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.600291 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-utilities\") pod \"certified-operators-2n45f\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.600412 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-catalog-content\") pod \"certified-operators-2n45f\" (UID: 
\"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.617766 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvbq9\" (UniqueName: \"kubernetes.io/projected/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-kube-api-access-bvbq9\") pod \"certified-operators-2n45f\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.700516 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xnxg\" (UniqueName: \"kubernetes.io/projected/ee01fe8d-937f-407c-980d-eb830e440294-kube-api-access-7xnxg\") pod \"ee01fe8d-937f-407c-980d-eb830e440294\" (UID: \"ee01fe8d-937f-407c-980d-eb830e440294\") " Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.700733 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee01fe8d-937f-407c-980d-eb830e440294-host\") pod \"ee01fe8d-937f-407c-980d-eb830e440294\" (UID: \"ee01fe8d-937f-407c-980d-eb830e440294\") " Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.701377 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee01fe8d-937f-407c-980d-eb830e440294-host" (OuterVolumeSpecName: "host") pod "ee01fe8d-937f-407c-980d-eb830e440294" (UID: "ee01fe8d-937f-407c-980d-eb830e440294"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.809467 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee01fe8d-937f-407c-980d-eb830e440294-host\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:29 crc kubenswrapper[4904]: I0223 11:27:29.825685 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:30 crc kubenswrapper[4904]: I0223 11:27:30.120031 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee01fe8d-937f-407c-980d-eb830e440294-kube-api-access-7xnxg" (OuterVolumeSpecName: "kube-api-access-7xnxg") pod "ee01fe8d-937f-407c-980d-eb830e440294" (UID: "ee01fe8d-937f-407c-980d-eb830e440294"). InnerVolumeSpecName "kube-api-access-7xnxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:27:30 crc kubenswrapper[4904]: I0223 11:27:30.216565 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xnxg\" (UniqueName: \"kubernetes.io/projected/ee01fe8d-937f-407c-980d-eb830e440294-kube-api-access-7xnxg\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:30 crc kubenswrapper[4904]: I0223 11:27:30.433793 4904 scope.go:117] "RemoveContainer" containerID="70846f69145242a5013de87b1b6e9514e78851bdadd6f4baff7e3b5738fb4eee" Feb 23 11:27:30 crc kubenswrapper[4904]: I0223 11:27:30.433871 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thlbh/crc-debug-frwl7" Feb 23 11:27:30 crc kubenswrapper[4904]: I0223 11:27:30.561090 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n45f"] Feb 23 11:27:30 crc kubenswrapper[4904]: W0223 11:27:30.570569 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc42e475b_ecfe_445c_ab98_2f2bcc3a2bb8.slice/crio-b7e9d9ca4e34b68a376c7aef6b943b76f126f0752d2e6aa826f37fdab3b78c2f WatchSource:0}: Error finding container b7e9d9ca4e34b68a376c7aef6b943b76f126f0752d2e6aa826f37fdab3b78c2f: Status 404 returned error can't find the container with id b7e9d9ca4e34b68a376c7aef6b943b76f126f0752d2e6aa826f37fdab3b78c2f Feb 23 11:27:31 crc kubenswrapper[4904]: I0223 11:27:31.270194 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee01fe8d-937f-407c-980d-eb830e440294" path="/var/lib/kubelet/pods/ee01fe8d-937f-407c-980d-eb830e440294/volumes" Feb 23 11:27:31 crc kubenswrapper[4904]: I0223 11:27:31.450496 4904 generic.go:334] "Generic (PLEG): container finished" podID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerID="fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b" exitCode=0 Feb 23 11:27:31 crc kubenswrapper[4904]: I0223 11:27:31.450535 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n45f" event={"ID":"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8","Type":"ContainerDied","Data":"fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b"} Feb 23 11:27:31 crc kubenswrapper[4904]: I0223 11:27:31.450557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n45f" event={"ID":"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8","Type":"ContainerStarted","Data":"b7e9d9ca4e34b68a376c7aef6b943b76f126f0752d2e6aa826f37fdab3b78c2f"} Feb 23 11:27:32 crc kubenswrapper[4904]: I0223 11:27:32.465495 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n45f" event={"ID":"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8","Type":"ContainerStarted","Data":"eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a"} Feb 23 11:27:34 crc kubenswrapper[4904]: I0223 11:27:34.484883 4904 generic.go:334] "Generic (PLEG): container finished" podID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerID="eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a" exitCode=0 Feb 23 11:27:34 crc kubenswrapper[4904]: I0223 11:27:34.484952 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n45f" event={"ID":"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8","Type":"ContainerDied","Data":"eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a"} Feb 23 11:27:35 crc kubenswrapper[4904]: I0223 11:27:35.496492 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n45f" event={"ID":"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8","Type":"ContainerStarted","Data":"b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373"} Feb 23 11:27:39 crc kubenswrapper[4904]: I0223 11:27:39.830559 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:39 crc kubenswrapper[4904]: I0223 11:27:39.841494 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2n45f" Feb 23 
11:27:39 crc kubenswrapper[4904]: I0223 11:27:39.972625 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:39 crc kubenswrapper[4904]: I0223 11:27:39.996126 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2n45f" podStartSLOduration=7.581782015 podStartE2EDuration="10.996105481s" podCreationTimestamp="2026-02-23 11:27:29 +0000 UTC" firstStartedPulling="2026-02-23 11:27:31.453659538 +0000 UTC m=+4884.874033041" lastFinishedPulling="2026-02-23 11:27:34.867982994 +0000 UTC m=+4888.288356507" observedRunningTime="2026-02-23 11:27:35.517602697 +0000 UTC m=+4888.937976210" watchObservedRunningTime="2026-02-23 11:27:39.996105481 +0000 UTC m=+4893.416479014" Feb 23 11:27:40 crc kubenswrapper[4904]: I0223 11:27:40.593019 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:40 crc kubenswrapper[4904]: I0223 11:27:40.649887 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n45f"] Feb 23 11:27:42 crc kubenswrapper[4904]: I0223 11:27:42.548726 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2n45f" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerName="registry-server" containerID="cri-o://b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373" gracePeriod=2 Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.033687 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.193802 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-catalog-content\") pod \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.193939 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvbq9\" (UniqueName: \"kubernetes.io/projected/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-kube-api-access-bvbq9\") pod \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.193976 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-utilities\") pod \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\" (UID: \"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8\") " Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.194958 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-utilities" (OuterVolumeSpecName: "utilities") pod "c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" (UID: "c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.195497 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.200748 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-kube-api-access-bvbq9" (OuterVolumeSpecName: "kube-api-access-bvbq9") pod "c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" (UID: "c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8"). InnerVolumeSpecName "kube-api-access-bvbq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.248568 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" (UID: "c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.297885 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.297926 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvbq9\" (UniqueName: \"kubernetes.io/projected/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8-kube-api-access-bvbq9\") on node \"crc\" DevicePath \"\"" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.568875 4904 generic.go:334] "Generic (PLEG): container finished" podID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerID="b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373" exitCode=0 Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.568916 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n45f" event={"ID":"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8","Type":"ContainerDied","Data":"b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373"} Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.568942 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n45f" event={"ID":"c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8","Type":"ContainerDied","Data":"b7e9d9ca4e34b68a376c7aef6b943b76f126f0752d2e6aa826f37fdab3b78c2f"} Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.568960 4904 scope.go:117] "RemoveContainer" containerID="b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.569098 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2n45f" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.613269 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n45f"] Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.615691 4904 scope.go:117] "RemoveContainer" containerID="eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.627488 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2n45f"] Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.637118 4904 scope.go:117] "RemoveContainer" containerID="fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.681516 4904 scope.go:117] "RemoveContainer" containerID="b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373" Feb 23 11:27:43 crc kubenswrapper[4904]: E0223 11:27:43.682094 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373\": container with ID starting with b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373 not found: ID does not exist" containerID="b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.682137 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373"} err="failed to get container status \"b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373\": rpc error: code = NotFound desc = could not find container \"b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373\": container with ID starting with b794eef211b5c1dc84b299423d285a94b941e0989cee0a2baeb2afb0e8e9c373 not found: ID does not exist" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.682165 4904 scope.go:117] "RemoveContainer" containerID="eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a" Feb 23 11:27:43 crc kubenswrapper[4904]: E0223 11:27:43.683388 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a\": container with ID starting with eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a not found: ID does not exist" containerID="eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.683418 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a"} err="failed to get container status \"eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a\": rpc error: code = NotFound desc = could not find container \"eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a\": container with ID starting with eac1cc54ba2b3dd88eacc21704a24999956251e09463d655670cc4ce319def4a not found: ID does not exist" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.683441 4904 scope.go:117] "RemoveContainer" containerID="fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b" Feb 23 11:27:43 crc kubenswrapper[4904]: E0223 11:27:43.683688 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b\": container with ID starting with fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b not found: ID does not exist" containerID="fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b" Feb 23 11:27:43 crc kubenswrapper[4904]: I0223 11:27:43.683710 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b"} err="failed to get container status \"fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b\": rpc error: code = NotFound desc = could not find container \"fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b\": container with ID starting with fd6f46868ad29de90eabfe0667c14ec4e7fa409ce819f85a3da8f571a919366b not found: ID does not exist" Feb 23 11:27:45 crc kubenswrapper[4904]: I0223 11:27:45.267206 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" path="/var/lib/kubelet/pods/c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8/volumes" Feb 23 11:28:02 crc kubenswrapper[4904]: I0223 11:28:02.816822 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f6485bd78-lkn6x_ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7/barbican-api/0.log" Feb 23 11:28:03 crc kubenswrapper[4904]: I0223 11:28:03.239781 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f6485bd78-lkn6x_ed1d8fb3-01b5-49ff-b7c3-66cfb7ff32e7/barbican-api-log/0.log" Feb 23 11:28:03 crc kubenswrapper[4904]: I0223 11:28:03.485178 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6955ddcccd-p7hv5_21c928c2-98cf-48a2-b04b-e7520b36c73a/barbican-keystone-listener/0.log" Feb 23 11:28:03 crc kubenswrapper[4904]: I0223 11:28:03.561251 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6955ddcccd-p7hv5_21c928c2-98cf-48a2-b04b-e7520b36c73a/barbican-keystone-listener-log/0.log" Feb 23 11:28:03 crc kubenswrapper[4904]: I0223 11:28:03.720118 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d8dc7bdcf-447fv_fd21ac44-8c6b-4db7-b5ed-76b0503419dc/barbican-worker/0.log" Feb 23 11:28:03 crc kubenswrapper[4904]: I0223 11:28:03.789648 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d8dc7bdcf-447fv_fd21ac44-8c6b-4db7-b5ed-76b0503419dc/barbican-worker-log/0.log" Feb 23 11:28:03 crc kubenswrapper[4904]: I0223 11:28:03.897322 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-qm5rg_da651589-0c88-4249-9dff-de1c46412cf5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.052252 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b4cc4234-fb72-4d00-95a6-82a77f062057/ceilometer-central-agent/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.118902 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b4cc4234-fb72-4d00-95a6-82a77f062057/ceilometer-notification-agent/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.157879 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b4cc4234-fb72-4d00-95a6-82a77f062057/proxy-httpd/0.log" Feb 23 
11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.233164 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b4cc4234-fb72-4d00-95a6-82a77f062057/sg-core/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.376105 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fe617375-c009-420b-bcad-a5a2a2bae412/cinder-api-log/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.411279 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fe617375-c009-420b-bcad-a5a2a2bae412/cinder-api/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.616814 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1/cinder-scheduler/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.692684 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f91b9d45-fe1b-4e6a-b9dd-7edb5ab619f1/probe/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.732488 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-v7t6f_3abf50a2-3000-41eb-97fc-814e2b55cd58/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.923859 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-f2dcw_7a5d553f-5efd-4ddf-953d-474d747de8f0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:04 crc kubenswrapper[4904]: I0223 11:28:04.979905 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-rfkgb_20e81db0-f0e3-4948-9f05-eb34de21e118/init/0.log" Feb 23 11:28:05 crc kubenswrapper[4904]: I0223 11:28:05.129482 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-rfkgb_20e81db0-f0e3-4948-9f05-eb34de21e118/init/0.log" Feb 23 11:28:05 crc kubenswrapper[4904]: I0223 11:28:05.274411 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-rfkgb_20e81db0-f0e3-4948-9f05-eb34de21e118/dnsmasq-dns/0.log" Feb 23 11:28:05 crc kubenswrapper[4904]: I0223 11:28:05.285151 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hszzb_3b2f45c5-2ae5-43d7-8845-333ed3242dab/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:05 crc kubenswrapper[4904]: I0223 11:28:05.430113 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_070efab3-5f9b-464c-8717-89ddd79f1ec9/glance-httpd/0.log" Feb 23 11:28:05 crc kubenswrapper[4904]: I0223 11:28:05.651644 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_070efab3-5f9b-464c-8717-89ddd79f1ec9/glance-log/0.log" Feb 23 11:28:05 crc kubenswrapper[4904]: I0223 11:28:05.856408 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_262fefd2-494e-4121-97f8-9c3e66e9afd7/glance-log/0.log" Feb 23 11:28:05 crc kubenswrapper[4904]: I0223 11:28:05.892172 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_262fefd2-494e-4121-97f8-9c3e66e9afd7/glance-httpd/0.log" Feb 23 11:28:06 crc kubenswrapper[4904]: I0223 11:28:06.119440 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-7cbb478958-6t4v7_e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b/horizon/0.log" Feb 23 11:28:06 crc kubenswrapper[4904]: I0223 11:28:06.229204 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-d26ck_4df029b4-8135-45ce-a861-bd06d35ee0ab/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:06 crc kubenswrapper[4904]: I0223 11:28:06.666259 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cbb478958-6t4v7_e3ed344c-65ef-4fcf-bf9a-e3e703c7e12b/horizon-log/0.log" Feb 23 11:28:06 crc kubenswrapper[4904]: I0223 11:28:06.707336 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-stq5n_07a003d0-1511-4391-a569-fa105e6bdf07/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:06 crc kubenswrapper[4904]: I0223 11:28:06.988987 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29530741-c4wkf_ea4a64d0-ed1f-4380-b948-fe750eb2c9af/keystone-cron/0.log" Feb 23 11:28:06 crc kubenswrapper[4904]: I0223 11:28:06.994377 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_50e2ff27-2573-4549-b31e-3fba348ec929/kube-state-metrics/0.log" Feb 23 11:28:07 crc kubenswrapper[4904]: I0223 11:28:07.208797 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-84d4456f94-cxsx9_87a8d0d0-2e01-4089-8a6c-722c46bd362b/keystone-api/0.log" Feb 23 11:28:07 crc kubenswrapper[4904]: I0223 11:28:07.349882 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-qh9wb_7c4a4b83-33c3-418e-a2b7-ad52490fc88a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:07 crc kubenswrapper[4904]: I0223 11:28:07.691652 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c456b7c45-bb96t_09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0/neutron-httpd/0.log" Feb 23 11:28:07 crc kubenswrapper[4904]: I0223 11:28:07.720963 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-47tzt_acb3154a-4b24-44c3-88f3-0c769ca1354d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:07 crc kubenswrapper[4904]: I0223 11:28:07.772640 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c456b7c45-bb96t_09a0ae63-3623-4ebc-ad0b-d71a1d0d6ba0/neutron-api/0.log" Feb 23 11:28:08 crc kubenswrapper[4904]: I0223 11:28:08.484881 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_581bdca5-1a15-48a4-bbf5-941d419f276d/nova-cell0-conductor-conductor/0.log" Feb 23 11:28:08 crc kubenswrapper[4904]: I0223 11:28:08.862600 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1c8e85f3-888e-4a20-a6be-bed2b85f2b45/nova-cell1-conductor-conductor/0.log" Feb 23 11:28:09 crc kubenswrapper[4904]: I0223 11:28:09.050792 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5f2eaf23-ec01-4da4-ab9b-ce90633dff13/nova-api-log/0.log" Feb 23 11:28:09 crc kubenswrapper[4904]: I0223 11:28:09.097035 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d3a2b238-4696-42c9-b713-365220c2ce44/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 11:28:09 crc kubenswrapper[4904]: I0223 11:28:09.245380 4904 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5f2eaf23-ec01-4da4-ab9b-ce90633dff13/nova-api-api/0.log" Feb 23 11:28:09 crc kubenswrapper[4904]: I0223 11:28:09.381784 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-vqkfb_97a52242-6885-47ca-8ee9-6f11cdadad18/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:09 crc kubenswrapper[4904]: I0223 11:28:09.450012 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_018a67a3-d954-4259-9c05-298dad7d5e9d/nova-metadata-log/0.log" Feb 23 11:28:09 crc kubenswrapper[4904]: I0223 11:28:09.883598 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_404e5fa5-dbcb-4e7e-ad52-96f65cb16015/mysql-bootstrap/0.log" Feb 23 11:28:09 crc kubenswrapper[4904]: I0223 11:28:09.920143 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1b9599c4-3c50-48f8-b978-0628ef4f799c/nova-scheduler-scheduler/0.log" Feb 23 11:28:10 crc kubenswrapper[4904]: I0223 11:28:10.063752 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_404e5fa5-dbcb-4e7e-ad52-96f65cb16015/mysql-bootstrap/0.log" Feb 23 11:28:10 crc kubenswrapper[4904]: I0223 11:28:10.157963 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_404e5fa5-dbcb-4e7e-ad52-96f65cb16015/galera/0.log" Feb 23 11:28:10 crc kubenswrapper[4904]: I0223 11:28:10.286034 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c2e03468-b21e-4a61-afd3-08f3c10c102d/mysql-bootstrap/0.log" Feb 23 11:28:10 crc kubenswrapper[4904]: I0223 11:28:10.510931 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c2e03468-b21e-4a61-afd3-08f3c10c102d/mysql-bootstrap/0.log" Feb 23 11:28:10 crc kubenswrapper[4904]: I0223 11:28:10.546845 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c2e03468-b21e-4a61-afd3-08f3c10c102d/galera/0.log" Feb 23 11:28:10 crc kubenswrapper[4904]: I0223 11:28:10.689197 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f6d61f1b-d4a2-4fb0-a12a-7383c50e07c8/openstackclient/0.log" Feb 23 11:28:10 crc kubenswrapper[4904]: I0223 11:28:10.779590 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zqsmt_28bca101-cf50-4eba-a2fe-e55dbc4fe121/openstack-network-exporter/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.045473 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-82gvj_7aa76c96-25cf-4196-a18f-9a33f9d9e195/ovsdb-server-init/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.199254 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-82gvj_7aa76c96-25cf-4196-a18f-9a33f9d9e195/ovs-vswitchd/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.254920 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-82gvj_7aa76c96-25cf-4196-a18f-9a33f9d9e195/ovsdb-server-init/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.263604 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-82gvj_7aa76c96-25cf-4196-a18f-9a33f9d9e195/ovsdb-server/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.403472 4904 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_018a67a3-d954-4259-9c05-298dad7d5e9d/nova-metadata-metadata/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.503951 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q7rxf_717c8a73-d7f4-48d3-920d-f573f4f9dc9b/ovn-controller/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.658088 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-srcxn_134b7bac-9265-4190-bcdc-847e77ecfce3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.680107 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22141935-93c0-47b1-aa17-ca81106c5f5c/ovn-northd/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.726662 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22141935-93c0-47b1-aa17-ca81106c5f5c/openstack-network-exporter/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.865257 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c16f60f3-f488-4d2c-858e-dee1662f8f4b/openstack-network-exporter/0.log" Feb 23 11:28:11 crc kubenswrapper[4904]: I0223 11:28:11.925607 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c16f60f3-f488-4d2c-858e-dee1662f8f4b/ovsdbserver-nb/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.108056 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e4601812-5c00-4a35-adc9-2003ca6001b2/openstack-network-exporter/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.133127 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e4601812-5c00-4a35-adc9-2003ca6001b2/ovsdbserver-sb/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.547612 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_203579f0-ffe4-42a4-91fd-b4a4340eb9fc/init-config-reloader/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.560894 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56bb764cb4-74prt_3a81e676-6e99-4cf5-88a8-78d0bb36f896/placement-api/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.597455 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56bb764cb4-74prt_3a81e676-6e99-4cf5-88a8-78d0bb36f896/placement-log/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.715158 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_203579f0-ffe4-42a4-91fd-b4a4340eb9fc/init-config-reloader/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.788681 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_203579f0-ffe4-42a4-91fd-b4a4340eb9fc/config-reloader/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.820891 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_203579f0-ffe4-42a4-91fd-b4a4340eb9fc/prometheus/0.log" Feb 23 11:28:12 crc kubenswrapper[4904]: I0223 11:28:12.824750 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_203579f0-ffe4-42a4-91fd-b4a4340eb9fc/thanos-sidecar/0.log" Feb 23 11:28:13 crc kubenswrapper[4904]: I0223 
11:28:13.007189 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f66afb49-9f1b-43ea-966f-8aaf91eea84a/setup-container/0.log" Feb 23 11:28:13 crc kubenswrapper[4904]: I0223 11:28:13.269795 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f66afb49-9f1b-43ea-966f-8aaf91eea84a/rabbitmq/0.log" Feb 23 11:28:13 crc kubenswrapper[4904]: I0223 11:28:13.309488 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f66afb49-9f1b-43ea-966f-8aaf91eea84a/setup-container/0.log" Feb 23 11:28:13 crc kubenswrapper[4904]: I0223 11:28:13.309677 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8d3577f6-3d30-4a6c-9485-0429f1eb87f5/setup-container/0.log" Feb 23 11:28:13 crc kubenswrapper[4904]: I0223 11:28:13.507339 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8d3577f6-3d30-4a6c-9485-0429f1eb87f5/rabbitmq/0.log" Feb 23 11:28:13 crc kubenswrapper[4904]: I0223 11:28:13.559534 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8d3577f6-3d30-4a6c-9485-0429f1eb87f5/setup-container/0.log" Feb 23 11:28:13 crc kubenswrapper[4904]: I0223 11:28:13.598299 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-cv6nq_7fb77479-c222-4b0b-92fa-7405cae94fd9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:13 crc kubenswrapper[4904]: I0223 11:28:13.773974 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xd75w_8987115a-6e48-49c4-bf47-aad12518b1d4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.082270 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-vhd76_acc091bb-7179-4ff4-ad67-3134e8143c90/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.210900 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tc422_24bb648a-839a-4b68-80ae-949a7995921d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.287796 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-kxtnf_9c82fdc6-273e-4aaf-80b9-d6c461d0a40b/ssh-known-hosts-edpm-deployment/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.529354 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d55dc77cc-gg7pn_7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f/proxy-server/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.690977 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-hddnt_8b69c9fa-305e-484f-98e7-c8928bec7a13/swift-ring-rebalance/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.695980 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d55dc77cc-gg7pn_7b9863a0-ae98-4daf-8cfe-eaaeb5f3d62f/proxy-httpd/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.831119 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/account-auditor/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.906781 
4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/account-reaper/0.log" Feb 23 11:28:14 crc kubenswrapper[4904]: I0223 11:28:14.985486 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/account-replicator/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.033984 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/account-server/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.095257 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/container-auditor/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.170783 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/container-replicator/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.238874 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/container-server/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.267024 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/container-updater/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.329494 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/object-auditor/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.342367 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_03ef630b-9a38-4867-9a0a-d16b2c1804a8/memcached/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.376258 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/object-expirer/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.500025 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/object-server/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.515443 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/object-replicator/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.591212 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/rsync/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.596473 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/swift-recon-cron/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.604898 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_71f24a32-6e0a-4a39-9570-92c373672a9b/object-updater/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.768701 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-tj2b2_e3551d06-5e0e-4a2d-886f-9a617433cfcd/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:15 crc kubenswrapper[4904]: I0223 11:28:15.958866 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_463cdfc8-f595-4253-8fcf-4da5d843fcf8/tempest-tests-tempest-tests-runner/0.log" Feb 23 11:28:16 crc kubenswrapper[4904]: I0223 11:28:16.500317 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2afac476-a697-44bb-8c89-8f727c74b150/test-operator-logs-container/0.log" Feb 23 11:28:16 crc kubenswrapper[4904]: I0223 11:28:16.538269 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-qmpbg_2c06f842-3a7c-47c7-a8c7-6738d29cdef7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 11:28:17 crc kubenswrapper[4904]: I0223 11:28:17.212673 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_eaeb9663-5c3b-4c64-becf-297691ff9f84/watcher-applier/0.log" Feb 23 11:28:17 crc kubenswrapper[4904]: I0223 11:28:17.637541 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61/watcher-api-log/0.log" Feb 23 11:28:18 crc kubenswrapper[4904]: I0223 11:28:18.033990 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_54c70650-ca5e-4eaf-92af-2704a01edf49/watcher-decision-engine/0.log" Feb 23 11:28:19 crc kubenswrapper[4904]: I0223 11:28:19.365826 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_67ec69c8-87eb-48b2-a4ee-3d3ecd58fe61/watcher-api/0.log" Feb 23 11:28:42 crc kubenswrapper[4904]: I0223 11:28:42.921370 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w9mpb"] Feb 23 11:28:42 crc kubenswrapper[4904]: E0223 11:28:42.922283 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerName="registry-server" Feb 23 11:28:42 crc kubenswrapper[4904]: I0223 11:28:42.922294 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerName="registry-server" Feb 23 11:28:42 crc kubenswrapper[4904]: E0223 11:28:42.922310 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerName="extract-content" Feb 23 11:28:42 crc kubenswrapper[4904]: I0223 11:28:42.922316 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerName="extract-content" Feb 23 11:28:42 crc kubenswrapper[4904]: E0223 11:28:42.922339 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerName="extract-utilities" Feb 23 11:28:42 crc kubenswrapper[4904]: I0223 11:28:42.922345 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerName="extract-utilities" Feb 23 11:28:42 crc kubenswrapper[4904]: I0223 11:28:42.922525 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42e475b-ecfe-445c-ab98-2f2bcc3a2bb8" containerName="registry-server" Feb 23 11:28:42 crc kubenswrapper[4904]: I0223 11:28:42.923963 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:42 crc kubenswrapper[4904]: I0223 11:28:42.931338 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9mpb"] Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.001367 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-catalog-content\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.001410 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-utilities\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.001453 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqwzw\" (UniqueName: \"kubernetes.io/projected/02a1cf92-91ac-4198-8794-8f38a8d625bc-kube-api-access-zqwzw\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.103536 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-catalog-content\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.103590 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-utilities\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.103632 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqwzw\" (UniqueName: \"kubernetes.io/projected/02a1cf92-91ac-4198-8794-8f38a8d625bc-kube-api-access-zqwzw\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.104304 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-catalog-content\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.104353 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-utilities\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.147599 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zqwzw\" (UniqueName: \"kubernetes.io/projected/02a1cf92-91ac-4198-8794-8f38a8d625bc-kube-api-access-zqwzw\") pod \"community-operators-w9mpb\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.265963 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:43 crc kubenswrapper[4904]: I0223 11:28:43.818839 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9mpb"] Feb 23 11:28:44 crc kubenswrapper[4904]: I0223 11:28:44.140780 4904 generic.go:334] "Generic (PLEG): container finished" podID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerID="80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a" exitCode=0 Feb 23 11:28:44 crc kubenswrapper[4904]: I0223 11:28:44.140891 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9mpb" event={"ID":"02a1cf92-91ac-4198-8794-8f38a8d625bc","Type":"ContainerDied","Data":"80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a"} Feb 23 11:28:44 crc kubenswrapper[4904]: I0223 11:28:44.141079 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9mpb" event={"ID":"02a1cf92-91ac-4198-8794-8f38a8d625bc","Type":"ContainerStarted","Data":"02b64e8fc99ec915f7e637ae681d0865b07d0b001982ea2efde4f3c37f3581eb"} Feb 23 11:28:45 crc kubenswrapper[4904]: I0223 11:28:45.152944 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9mpb" event={"ID":"02a1cf92-91ac-4198-8794-8f38a8d625bc","Type":"ContainerStarted","Data":"d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1"} Feb 23 11:28:46 crc kubenswrapper[4904]: I0223 11:28:46.164316 4904 generic.go:334] "Generic (PLEG): container finished" podID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerID="d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1" exitCode=0 Feb 23 11:28:46 crc kubenswrapper[4904]: I0223 11:28:46.164408 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9mpb" event={"ID":"02a1cf92-91ac-4198-8794-8f38a8d625bc","Type":"ContainerDied","Data":"d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1"} Feb 23 11:28:47 crc kubenswrapper[4904]: I0223 11:28:47.174834 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9mpb" event={"ID":"02a1cf92-91ac-4198-8794-8f38a8d625bc","Type":"ContainerStarted","Data":"b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5"} Feb 23 11:28:47 crc kubenswrapper[4904]: I0223 11:28:47.203200 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w9mpb" podStartSLOduration=2.814591097 podStartE2EDuration="5.203180449s" podCreationTimestamp="2026-02-23 11:28:42 +0000 UTC" firstStartedPulling="2026-02-23 11:28:44.143190183 +0000 UTC m=+4957.563563696" lastFinishedPulling="2026-02-23 11:28:46.531779535 +0000 UTC m=+4959.952153048" observedRunningTime="2026-02-23 11:28:47.195752977 +0000 UTC m=+4960.616126490" watchObservedRunningTime="2026-02-23 11:28:47.203180449 +0000 UTC m=+4960.623553962" Feb 23 11:28:47 crc kubenswrapper[4904]: I0223 11:28:47.634177 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4_416f9591-cdad-4b2f-bf2b-a9af67b1b260/util/0.log" Feb 23 11:28:47 crc kubenswrapper[4904]: I0223 11:28:47.930436 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4_416f9591-cdad-4b2f-bf2b-a9af67b1b260/util/0.log" Feb 23 11:28:47 crc kubenswrapper[4904]: I0223 11:28:47.967200 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4_416f9591-cdad-4b2f-bf2b-a9af67b1b260/pull/0.log" Feb 23 11:28:47 crc kubenswrapper[4904]: I0223 11:28:47.974541 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4_416f9591-cdad-4b2f-bf2b-a9af67b1b260/pull/0.log" Feb 23 11:28:48 crc kubenswrapper[4904]: I0223 11:28:48.146361 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4_416f9591-cdad-4b2f-bf2b-a9af67b1b260/util/0.log" Feb 23 11:28:48 crc kubenswrapper[4904]: I0223 11:28:48.389701 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4_416f9591-cdad-4b2f-bf2b-a9af67b1b260/extract/0.log" Feb 23 11:28:48 crc kubenswrapper[4904]: I0223 11:28:48.403249 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37a65f179544e70cf83b913c514cc346751ba2a94547f1b9ea02b9aa0anm9l4_416f9591-cdad-4b2f-bf2b-a9af67b1b260/pull/0.log" Feb 23 11:28:48 crc kubenswrapper[4904]: I0223 11:28:48.927065 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-6qtcx_8101934b-1cbc-45c6-9f81-d9da4c586b55/manager/0.log" Feb 23 11:28:49 crc kubenswrapper[4904]: I0223 11:28:49.251361 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68c6d499cb-hwwbw_e76c5a19-592e-4739-b437-28157ab7d3d5/manager/0.log" Feb 23 11:28:49 crc kubenswrapper[4904]: I0223 11:28:49.374767 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-9595d6797-7tpwb_2be0ca02-8806-415f-addc-9cd1765721dc/manager/0.log" Feb 23 11:28:49 crc kubenswrapper[4904]: I0223 11:28:49.614929 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54fb488b88-wdw2p_b76d29c0-207f-45c3-a983-6496fd95588e/manager/0.log" Feb 23 11:28:50 crc kubenswrapper[4904]: I0223 11:28:50.260034 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6494cdbf8f-dk89f_47a11eed-a07a-47f8-9b13-2fd4d7610c65/manager/0.log" Feb 23 11:28:50 crc kubenswrapper[4904]: I0223 11:28:50.309430 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-66d6b5f488-5ghdl_86643e54-73df-41f4-a567-6631562e465b/manager/0.log" Feb 23 11:28:50 crc kubenswrapper[4904]: I0223 11:28:50.634340 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c78d668d5-b8bjv_d56c4104-6cd0-4d5f-b63e-1be797de40d8/manager/0.log" Feb 23 11:28:50 crc kubenswrapper[4904]: I0223 11:28:50.909567 4904 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-96fff9cb8-f6gqt_87c2d044-3f5b-4a7c-80a9-f70c00310af9/manager/0.log" Feb 23 11:28:51 crc kubenswrapper[4904]: I0223 11:28:51.156801 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66997756f6-hf8p4_82b3f66d-f7fd-4949-bb69-a8203973ce95/manager/0.log" Feb 23 11:28:51 crc kubenswrapper[4904]: I0223 11:28:51.232888 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-57746b5ff9-9gdtf_13a8ec0f-4892-4d72-947d-e87ab49b3262/manager/0.log" Feb 23 11:28:51 crc kubenswrapper[4904]: I0223 11:28:51.378009 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54967dbbdf-h6d6n_1944eba9-ffe2-467e-8ee8-ae7cf23aa1c4/manager/0.log" Feb 23 11:28:51 crc kubenswrapper[4904]: I0223 11:28:51.612381 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ddd85db87-d6v74_81f9873a-af57-4f93-85cc-df46dbcbcde8/manager/0.log" Feb 23 11:28:51 crc kubenswrapper[4904]: I0223 11:28:51.875765 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-ktlfr_4ae6c04a-de30-4cba-8b66-740d209955b8/manager/0.log" Feb 23 11:28:52 crc kubenswrapper[4904]: I0223 11:28:52.164063 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-794bd488b9-t978s_e83028a5-c494-4797-8675-c6fe7d90a156/operator/0.log" Feb 23 11:28:52 crc kubenswrapper[4904]: I0223 11:28:52.366217 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-56xvj_0ae66b0c-ec0d-42ee-902a-280f8a586cf2/registry-server/0.log" Feb 23 11:28:52 crc kubenswrapper[4904]: I0223 11:28:52.846518 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-85c99d655-s95c7_81c9e2b3-cfc0-400f-9534-f9f6ba8f0482/manager/0.log" Feb 23 11:28:53 crc kubenswrapper[4904]: I0223 11:28:53.055165 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57bd55f9b7-wz2gg_c46970b9-27d9-4b4e-a470-20df6b3fd44c/manager/0.log" Feb 23 11:28:53 crc kubenswrapper[4904]: I0223 11:28:53.269547 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:53 crc kubenswrapper[4904]: I0223 11:28:53.269581 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:53 crc kubenswrapper[4904]: I0223 11:28:53.329050 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:53 crc kubenswrapper[4904]: I0223 11:28:53.337586 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bt9jc_98685ce7-3242-4e1e-94f9-bf90399619d3/operator/0.log" Feb 23 11:28:53 crc kubenswrapper[4904]: I0223 11:28:53.581114 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-79558bbfbf-2mds6_a2e7a886-a671-44c1-909d-90224369a5e2/manager/0.log" Feb 23 11:28:54 crc kubenswrapper[4904]: I0223 
11:28:54.019054 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-58wnl_58c024f9-9e55-4e30-9dbc-8ba460e4b91d/manager/0.log" Feb 23 11:28:54 crc kubenswrapper[4904]: I0223 11:28:54.168914 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-x55xc_27e5aeb0-8732-493a-9ec6-ebb846416db9/manager/0.log" Feb 23 11:28:54 crc kubenswrapper[4904]: I0223 11:28:54.296222 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:54 crc kubenswrapper[4904]: I0223 11:28:54.352885 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9mpb"] Feb 23 11:28:54 crc kubenswrapper[4904]: I0223 11:28:54.468805 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-dd8cbd9bf-pnztz_46699645-af00-4370-a5dd-c1c94361be2b/manager/0.log" Feb 23 11:28:54 crc kubenswrapper[4904]: I0223 11:28:54.499952 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-ccb96f8ff-gxgct_d5c4df32-1ca8-4497-8ccd-30b31f71f364/manager/0.log" Feb 23 11:28:54 crc kubenswrapper[4904]: I0223 11:28:54.696925 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-745bbbd77b-kxzq8_a28a8d14-5770-426d-b7ba-f2c89f1c5f3f/manager/0.log" Feb 23 11:28:56 crc kubenswrapper[4904]: I0223 11:28:56.248076 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w9mpb" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerName="registry-server" containerID="cri-o://b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5" gracePeriod=2 Feb 23 11:28:56 crc kubenswrapper[4904]: I0223 11:28:56.781100 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:56 crc kubenswrapper[4904]: I0223 11:28:56.965076 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-catalog-content\") pod \"02a1cf92-91ac-4198-8794-8f38a8d625bc\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " Feb 23 11:28:56 crc kubenswrapper[4904]: I0223 11:28:56.965347 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqwzw\" (UniqueName: \"kubernetes.io/projected/02a1cf92-91ac-4198-8794-8f38a8d625bc-kube-api-access-zqwzw\") pod \"02a1cf92-91ac-4198-8794-8f38a8d625bc\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " Feb 23 11:28:56 crc kubenswrapper[4904]: I0223 11:28:56.965447 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-utilities\") pod \"02a1cf92-91ac-4198-8794-8f38a8d625bc\" (UID: \"02a1cf92-91ac-4198-8794-8f38a8d625bc\") " Feb 23 11:28:56 crc kubenswrapper[4904]: I0223 11:28:56.966399 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-utilities" (OuterVolumeSpecName: "utilities") pod "02a1cf92-91ac-4198-8794-8f38a8d625bc" (UID: "02a1cf92-91ac-4198-8794-8f38a8d625bc"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:28:56 crc kubenswrapper[4904]: I0223 11:28:56.992481 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a1cf92-91ac-4198-8794-8f38a8d625bc-kube-api-access-zqwzw" (OuterVolumeSpecName: "kube-api-access-zqwzw") pod "02a1cf92-91ac-4198-8794-8f38a8d625bc" (UID: "02a1cf92-91ac-4198-8794-8f38a8d625bc"). InnerVolumeSpecName "kube-api-access-zqwzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.068666 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.068735 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqwzw\" (UniqueName: \"kubernetes.io/projected/02a1cf92-91ac-4198-8794-8f38a8d625bc-kube-api-access-zqwzw\") on node \"crc\" DevicePath \"\"" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.068840 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02a1cf92-91ac-4198-8794-8f38a8d625bc" (UID: "02a1cf92-91ac-4198-8794-8f38a8d625bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.169863 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a1cf92-91ac-4198-8794-8f38a8d625bc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.291952 4904 generic.go:334] "Generic (PLEG): container finished" podID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerID="b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5" exitCode=0 Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.292381 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9mpb" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.348136 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9mpb" event={"ID":"02a1cf92-91ac-4198-8794-8f38a8d625bc","Type":"ContainerDied","Data":"b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5"} Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.348190 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9mpb" event={"ID":"02a1cf92-91ac-4198-8794-8f38a8d625bc","Type":"ContainerDied","Data":"02b64e8fc99ec915f7e637ae681d0865b07d0b001982ea2efde4f3c37f3581eb"} Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.348246 4904 scope.go:117] "RemoveContainer" containerID="b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.359941 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9mpb"] Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.369866 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w9mpb"] Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.397184 4904 scope.go:117] "RemoveContainer" containerID="d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.421299 4904 scope.go:117] "RemoveContainer" containerID="80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.480517 4904 scope.go:117] "RemoveContainer" containerID="b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5" Feb 23 11:28:57 crc kubenswrapper[4904]: E0223 11:28:57.480988 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5\": container with ID starting with b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5 not found: ID does not exist" containerID="b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.481107 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5"} err="failed to get container status \"b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5\": rpc error: code = NotFound desc = could not find container \"b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5\": container with ID starting with b9f1020454d96f1b272e328e12533504b1385d32d65f63083c627ac2acf208e5 not found: ID does not exist" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.481191 4904 scope.go:117] "RemoveContainer" containerID="d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1" Feb 23 11:28:57 crc kubenswrapper[4904]: E0223 11:28:57.481774 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1\": container with ID starting with d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1 not found: ID does not exist" containerID="d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.481819 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1"} err="failed to get container status \"d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1\": rpc error: code = NotFound desc = could not find container \"d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1\": container with ID starting with d54ca98e94dbfa2240490f5ba6bf5bd201a627eaa00d8ed1e3c5c28b75a5ada1 not found: ID does not exist" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.481848 4904 scope.go:117] "RemoveContainer" containerID="80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a" Feb 23 11:28:57 crc kubenswrapper[4904]: E0223 11:28:57.482169 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a\": container with ID starting with 80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a not found: ID does not exist" containerID="80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a" Feb 23 11:28:57 crc kubenswrapper[4904]: I0223 11:28:57.482245 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a"} err="failed to get container status \"80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a\": rpc error: code = NotFound desc = could not find container \"80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a\": container with ID starting with 80033777d30de726c55b4eeaaf770bc0ae05b3af59ebb23aa1b04e544d850f8a not found: ID does not exist" Feb 23 11:28:59 crc kubenswrapper[4904]: I0223 11:28:59.269273 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" path="/var/lib/kubelet/pods/02a1cf92-91ac-4198-8794-8f38a8d625bc/volumes" Feb 23 11:29:00 crc kubenswrapper[4904]: I0223 11:29:00.523834 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-c4b7d6946-5btm2_655fcd29-393f-400c-99a7-01cd2f54f6e8/manager/0.log" Feb 23 11:29:17 crc kubenswrapper[4904]: I0223 11:29:17.398393 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:29:17 crc kubenswrapper[4904]: I0223 11:29:17.399389 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:29:19 crc kubenswrapper[4904]: I0223 11:29:19.369866 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rrs2w_14a8ee7a-0483-4a58-a9e4-7b26248e998b/control-plane-machine-set-operator/0.log" Feb 23 11:29:19 crc kubenswrapper[4904]: I0223 11:29:19.817588 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bm6q7_fb367911-0f91-48b5-badb-1338ea2de5c1/machine-api-operator/0.log" Feb 23 
11:29:19 crc kubenswrapper[4904]: I0223 11:29:19.864021 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bm6q7_fb367911-0f91-48b5-badb-1338ea2de5c1/kube-rbac-proxy/0.log" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.474802 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-72jp5"] Feb 23 11:29:31 crc kubenswrapper[4904]: E0223 11:29:31.475819 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerName="registry-server" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.475840 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerName="registry-server" Feb 23 11:29:31 crc kubenswrapper[4904]: E0223 11:29:31.475859 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerName="extract-content" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.475869 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerName="extract-content" Feb 23 11:29:31 crc kubenswrapper[4904]: E0223 11:29:31.475923 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerName="extract-utilities" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.475934 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerName="extract-utilities" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.476189 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a1cf92-91ac-4198-8794-8f38a8d625bc" containerName="registry-server" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.477983 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.505771 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72jp5"] Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.656120 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l5j4\" (UniqueName: \"kubernetes.io/projected/423ef0c3-d84a-4978-ad6f-220655aec7d2-kube-api-access-5l5j4\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.656233 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-catalog-content\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.656273 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-utilities\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.758761 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l5j4\" (UniqueName: \"kubernetes.io/projected/423ef0c3-d84a-4978-ad6f-220655aec7d2-kube-api-access-5l5j4\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.758914 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-catalog-content\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.758970 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-utilities\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.759489 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-utilities\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.759500 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-catalog-content\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.778342 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5l5j4\" (UniqueName: \"kubernetes.io/projected/423ef0c3-d84a-4978-ad6f-220655aec7d2-kube-api-access-5l5j4\") pod \"redhat-operators-72jp5\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") " pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:31 crc kubenswrapper[4904]: I0223 11:29:31.820740 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:32 crc kubenswrapper[4904]: I0223 11:29:32.389302 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72jp5"] Feb 23 11:29:32 crc kubenswrapper[4904]: I0223 11:29:32.615525 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72jp5" event={"ID":"423ef0c3-d84a-4978-ad6f-220655aec7d2","Type":"ContainerStarted","Data":"e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde"} Feb 23 11:29:32 crc kubenswrapper[4904]: I0223 11:29:32.615566 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72jp5" event={"ID":"423ef0c3-d84a-4978-ad6f-220655aec7d2","Type":"ContainerStarted","Data":"3e953876f898c0d9b6e40b2b13ec55126a31fb5d3b8c61d4f4efe40d0f5a96ec"} Feb 23 11:29:33 crc kubenswrapper[4904]: I0223 11:29:33.626579 4904 generic.go:334] "Generic (PLEG): container finished" podID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerID="e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde" exitCode=0 Feb 23 11:29:33 crc kubenswrapper[4904]: I0223 11:29:33.626664 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72jp5" event={"ID":"423ef0c3-d84a-4978-ad6f-220655aec7d2","Type":"ContainerDied","Data":"e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde"} Feb 23 11:29:34 crc kubenswrapper[4904]: I0223 11:29:34.224221 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kmtx9_5f2bc9ed-640c-4d77-b4c1-996b8adc337f/cert-manager-controller/0.log" Feb 23 11:29:34 crc kubenswrapper[4904]: I0223 11:29:34.389777 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7f5jm_984f83b9-1f07-4198-af7f-c93cdb296e75/cert-manager-cainjector/0.log" Feb 23 11:29:34 crc kubenswrapper[4904]: I0223 11:29:34.635658 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72jp5" event={"ID":"423ef0c3-d84a-4978-ad6f-220655aec7d2","Type":"ContainerStarted","Data":"35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f"} Feb 23 11:29:34 crc kubenswrapper[4904]: I0223 11:29:34.655688 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dr7zn_048cc369-9691-4a4c-9140-8ab1aa5e1cca/cert-manager-webhook/0.log" Feb 23 11:29:39 crc kubenswrapper[4904]: I0223 11:29:39.682842 4904 generic.go:334] "Generic (PLEG): container finished" podID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerID="35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f" exitCode=0 Feb 23 11:29:39 crc kubenswrapper[4904]: I0223 11:29:39.682917 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72jp5" event={"ID":"423ef0c3-d84a-4978-ad6f-220655aec7d2","Type":"ContainerDied","Data":"35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f"} Feb 23 11:29:40 crc kubenswrapper[4904]: I0223 11:29:40.695472 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72jp5" event={"ID":"423ef0c3-d84a-4978-ad6f-220655aec7d2","Type":"ContainerStarted","Data":"7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8"} Feb 23 11:29:40 crc kubenswrapper[4904]: I0223 11:29:40.730668 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-72jp5" podStartSLOduration=3.311756795 podStartE2EDuration="9.730646508s" podCreationTimestamp="2026-02-23 11:29:31 +0000 UTC" firstStartedPulling="2026-02-23 11:29:33.629216795 +0000 UTC m=+5007.049590298" lastFinishedPulling="2026-02-23 11:29:40.048106498 +0000 UTC m=+5013.468480011" observedRunningTime="2026-02-23 11:29:40.726241573 +0000 UTC m=+5014.146615086" watchObservedRunningTime="2026-02-23 11:29:40.730646508 +0000 UTC m=+5014.151020021" Feb 23 11:29:41 crc kubenswrapper[4904]: I0223 11:29:41.821604 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:41 crc kubenswrapper[4904]: I0223 11:29:41.821663 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:42 crc kubenswrapper[4904]: I0223 11:29:42.894940 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-72jp5" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="registry-server" probeResult="failure" output=< Feb 23 11:29:42 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Feb 23 11:29:42 crc kubenswrapper[4904]: > Feb 23 11:29:47 crc kubenswrapper[4904]: I0223 11:29:47.398314 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:29:47 crc kubenswrapper[4904]: I0223 11:29:47.398959 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:29:51 crc kubenswrapper[4904]: I0223 11:29:51.037267 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-2d2cz_b7287f83-1d7a-4d2c-a330-36c3ab421222/nmstate-console-plugin/0.log" Feb 23 11:29:51 crc kubenswrapper[4904]: I0223 11:29:51.826640 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-m9gsm_505d4d9b-90d8-40c4-ac88-7ebe41db94ed/nmstate-metrics/0.log" Feb 23 11:29:51 crc kubenswrapper[4904]: I0223 11:29:51.833566 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-m9gsm_505d4d9b-90d8-40c4-ac88-7ebe41db94ed/kube-rbac-proxy/0.log" Feb 23 11:29:51 crc kubenswrapper[4904]: I0223 11:29:51.836418 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hrczn_103dbc2d-351d-479d-91c4-59c5a650c8e5/nmstate-handler/0.log" Feb 23 11:29:51 crc kubenswrapper[4904]: I0223 11:29:51.874210 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-72jp5" Feb 23 11:29:51 
Feb 23 11:29:52 crc kubenswrapper[4904]: I0223 11:29:52.021943 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-x5kct_eb53b4a6-a446-46c7-b022-0079015db963/nmstate-operator/0.log"
Feb 23 11:29:52 crc kubenswrapper[4904]: I0223 11:29:52.067167 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-nw5sw_f927138f-bb89-4c3a-a543-d5256a3cba5c/nmstate-webhook/0.log"
Feb 23 11:29:52 crc kubenswrapper[4904]: I0223 11:29:52.121753 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72jp5"]
Feb 23 11:29:53 crc kubenswrapper[4904]: I0223 11:29:53.812475 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-72jp5" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="registry-server" containerID="cri-o://7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8" gracePeriod=2
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.321938 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72jp5"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.499717 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-utilities\") pod \"423ef0c3-d84a-4978-ad6f-220655aec7d2\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") "
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.500171 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-catalog-content\") pod \"423ef0c3-d84a-4978-ad6f-220655aec7d2\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") "
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.500275 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l5j4\" (UniqueName: \"kubernetes.io/projected/423ef0c3-d84a-4978-ad6f-220655aec7d2-kube-api-access-5l5j4\") pod \"423ef0c3-d84a-4978-ad6f-220655aec7d2\" (UID: \"423ef0c3-d84a-4978-ad6f-220655aec7d2\") "
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.500627 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-utilities" (OuterVolumeSpecName: "utilities") pod "423ef0c3-d84a-4978-ad6f-220655aec7d2" (UID: "423ef0c3-d84a-4978-ad6f-220655aec7d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.501598 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.505256 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423ef0c3-d84a-4978-ad6f-220655aec7d2-kube-api-access-5l5j4" (OuterVolumeSpecName: "kube-api-access-5l5j4") pod "423ef0c3-d84a-4978-ad6f-220655aec7d2" (UID: "423ef0c3-d84a-4978-ad6f-220655aec7d2"). InnerVolumeSpecName "kube-api-access-5l5j4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.603405 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l5j4\" (UniqueName: \"kubernetes.io/projected/423ef0c3-d84a-4978-ad6f-220655aec7d2-kube-api-access-5l5j4\") on node \"crc\" DevicePath \"\""
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.629466 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "423ef0c3-d84a-4978-ad6f-220655aec7d2" (UID: "423ef0c3-d84a-4978-ad6f-220655aec7d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.705270 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423ef0c3-d84a-4978-ad6f-220655aec7d2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.823566 4904 generic.go:334] "Generic (PLEG): container finished" podID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerID="7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8" exitCode=0
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.823603 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72jp5" event={"ID":"423ef0c3-d84a-4978-ad6f-220655aec7d2","Type":"ContainerDied","Data":"7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8"}
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.823628 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72jp5" event={"ID":"423ef0c3-d84a-4978-ad6f-220655aec7d2","Type":"ContainerDied","Data":"3e953876f898c0d9b6e40b2b13ec55126a31fb5d3b8c61d4f4efe40d0f5a96ec"}
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.823643 4904 scope.go:117] "RemoveContainer" containerID="7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.823763 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72jp5"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.865280 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72jp5"]
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.872718 4904 scope.go:117] "RemoveContainer" containerID="35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.873577 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-72jp5"]
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.901852 4904 scope.go:117] "RemoveContainer" containerID="e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.964674 4904 scope.go:117] "RemoveContainer" containerID="7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8"
Feb 23 11:29:54 crc kubenswrapper[4904]: E0223 11:29:54.965350 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8\": container with ID starting with 7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8 not found: ID does not exist" containerID="7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.965493 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8"} err="failed to get container status \"7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8\": rpc error: code = NotFound desc = could not find container \"7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8\": container with ID starting with 7f67eae9544d46c7915a8e9c6de2c3f9110954ef3eb8cb03f0d4c37df84266c8 not found: ID does not exist"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.965607 4904 scope.go:117] "RemoveContainer" containerID="35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f"
Feb 23 11:29:54 crc kubenswrapper[4904]: E0223 11:29:54.966185 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f\": container with ID starting with 35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f not found: ID does not exist" containerID="35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.966251 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f"} err="failed to get container status \"35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f\": rpc error: code = NotFound desc = could not find container \"35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f\": container with ID starting with 35be1f06c5c3042b541a4f4221204b258f715aedeff340242e08f0a8c202262f not found: ID does not exist"
Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.966293 4904 scope.go:117] "RemoveContainer" containerID="e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde"
Feb 23 11:29:54 crc kubenswrapper[4904]: E0223 11:29:54.966781 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde\": container with ID starting with e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde not found: ID does not exist" containerID="e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde"
err="rpc error: code = NotFound desc = could not find container \"e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde\": container with ID starting with e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde not found: ID does not exist" containerID="e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde" Feb 23 11:29:54 crc kubenswrapper[4904]: I0223 11:29:54.966899 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde"} err="failed to get container status \"e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde\": rpc error: code = NotFound desc = could not find container \"e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde\": container with ID starting with e3cab7c6191c411a21a4301336f9db21c88a090b46e8c6e7dc9ddf63c0626fde not found: ID does not exist" Feb 23 11:29:55 crc kubenswrapper[4904]: I0223 11:29:55.268836 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" path="/var/lib/kubelet/pods/423ef0c3-d84a-4978-ad6f-220655aec7d2/volumes" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.190270 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft"] Feb 23 11:30:00 crc kubenswrapper[4904]: E0223 11:30:00.191712 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="registry-server" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.191826 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="registry-server" Feb 23 11:30:00 crc kubenswrapper[4904]: E0223 11:30:00.191886 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="extract-content" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.191927 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="extract-content" Feb 23 11:30:00 crc kubenswrapper[4904]: E0223 11:30:00.191973 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="extract-utilities" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.191985 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="extract-utilities" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.192347 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="423ef0c3-d84a-4978-ad6f-220655aec7d2" containerName="registry-server" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.193460 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.196660 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.197260 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.223743 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft"] Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.238433 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6g7r\" (UniqueName: \"kubernetes.io/projected/75e065d9-11cf-4048-aafd-05b4bd2c1a52-kube-api-access-w6g7r\") pod \"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.238492 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e065d9-11cf-4048-aafd-05b4bd2c1a52-secret-volume\") pod \"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.238535 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e065d9-11cf-4048-aafd-05b4bd2c1a52-config-volume\") pod \"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.339828 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e065d9-11cf-4048-aafd-05b4bd2c1a52-secret-volume\") pod \"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.339930 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e065d9-11cf-4048-aafd-05b4bd2c1a52-config-volume\") pod \"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.340157 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6g7r\" (UniqueName: \"kubernetes.io/projected/75e065d9-11cf-4048-aafd-05b4bd2c1a52-kube-api-access-w6g7r\") pod \"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.340909 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e065d9-11cf-4048-aafd-05b4bd2c1a52-config-volume\") pod 
\"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.347183 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e065d9-11cf-4048-aafd-05b4bd2c1a52-secret-volume\") pod \"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.359976 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6g7r\" (UniqueName: \"kubernetes.io/projected/75e065d9-11cf-4048-aafd-05b4bd2c1a52-kube-api-access-w6g7r\") pod \"collect-profiles-29530770-2dpft\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:00 crc kubenswrapper[4904]: I0223 11:30:00.524254 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:01 crc kubenswrapper[4904]: I0223 11:30:01.004512 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft"] Feb 23 11:30:01 crc kubenswrapper[4904]: I0223 11:30:01.891177 4904 generic.go:334] "Generic (PLEG): container finished" podID="75e065d9-11cf-4048-aafd-05b4bd2c1a52" containerID="95ffd28252e99cf12d54f80f861c2acb641027a6bddaacc6fe247236a706c472" exitCode=0 Feb 23 11:30:01 crc kubenswrapper[4904]: I0223 11:30:01.891247 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" event={"ID":"75e065d9-11cf-4048-aafd-05b4bd2c1a52","Type":"ContainerDied","Data":"95ffd28252e99cf12d54f80f861c2acb641027a6bddaacc6fe247236a706c472"} Feb 23 11:30:01 crc kubenswrapper[4904]: I0223 11:30:01.891455 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" event={"ID":"75e065d9-11cf-4048-aafd-05b4bd2c1a52","Type":"ContainerStarted","Data":"816cfba36ace91489c9be8f68d69b00d22530f4e5936e6f5ef06237551816ef2"} Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.280784 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.300562 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e065d9-11cf-4048-aafd-05b4bd2c1a52-secret-volume\") pod \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.300659 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6g7r\" (UniqueName: \"kubernetes.io/projected/75e065d9-11cf-4048-aafd-05b4bd2c1a52-kube-api-access-w6g7r\") pod \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.300897 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e065d9-11cf-4048-aafd-05b4bd2c1a52-config-volume\") pod \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\" (UID: \"75e065d9-11cf-4048-aafd-05b4bd2c1a52\") " Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.302976 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e065d9-11cf-4048-aafd-05b4bd2c1a52-config-volume" (OuterVolumeSpecName: "config-volume") pod "75e065d9-11cf-4048-aafd-05b4bd2c1a52" (UID: "75e065d9-11cf-4048-aafd-05b4bd2c1a52"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.308945 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e065d9-11cf-4048-aafd-05b4bd2c1a52-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75e065d9-11cf-4048-aafd-05b4bd2c1a52" (UID: "75e065d9-11cf-4048-aafd-05b4bd2c1a52"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.309125 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e065d9-11cf-4048-aafd-05b4bd2c1a52-kube-api-access-w6g7r" (OuterVolumeSpecName: "kube-api-access-w6g7r") pod "75e065d9-11cf-4048-aafd-05b4bd2c1a52" (UID: "75e065d9-11cf-4048-aafd-05b4bd2c1a52"). InnerVolumeSpecName "kube-api-access-w6g7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.403467 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e065d9-11cf-4048-aafd-05b4bd2c1a52-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.403509 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e065d9-11cf-4048-aafd-05b4bd2c1a52-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.403542 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6g7r\" (UniqueName: \"kubernetes.io/projected/75e065d9-11cf-4048-aafd-05b4bd2c1a52-kube-api-access-w6g7r\") on node \"crc\" DevicePath \"\"" Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.914415 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" event={"ID":"75e065d9-11cf-4048-aafd-05b4bd2c1a52","Type":"ContainerDied","Data":"816cfba36ace91489c9be8f68d69b00d22530f4e5936e6f5ef06237551816ef2"} Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.914809 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="816cfba36ace91489c9be8f68d69b00d22530f4e5936e6f5ef06237551816ef2" Feb 23 11:30:03 crc kubenswrapper[4904]: I0223 11:30:03.914503 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530770-2dpft" Feb 23 11:30:04 crc kubenswrapper[4904]: I0223 11:30:04.381713 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q"] Feb 23 11:30:04 crc kubenswrapper[4904]: I0223 11:30:04.392819 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530725-rlc5q"] Feb 23 11:30:05 crc kubenswrapper[4904]: I0223 11:30:05.268942 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59020053-fe67-410f-9d41-08609943074a" path="/var/lib/kubelet/pods/59020053-fe67-410f-9d41-08609943074a/volumes" Feb 23 11:30:06 crc kubenswrapper[4904]: I0223 11:30:06.597813 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-mzcx5_0575e4d3-040f-492f-92ff-ea6a433ce2a2/prometheus-operator/0.log" Feb 23 11:30:06 crc kubenswrapper[4904]: I0223 11:30:06.746769 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn_82047769-4adf-4b19-bd85-04bd9c681616/prometheus-operator-admission-webhook/0.log" Feb 23 11:30:06 crc kubenswrapper[4904]: I0223 11:30:06.818583 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln_8341f5fc-eb1e-4d05-935d-653320cfac4a/prometheus-operator-admission-webhook/0.log" Feb 23 11:30:06 crc kubenswrapper[4904]: I0223 11:30:06.935959 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mrt8q_6b2753c5-297c-45a4-be5d-66e00f49448e/operator/0.log" Feb 23 11:30:07 crc kubenswrapper[4904]: I0223 11:30:07.006728 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6dm8x_e04b69a5-a057-4e3d-81bf-509d8cce4ec4/perses-operator/0.log" Feb 23 11:30:17 crc kubenswrapper[4904]: I0223 11:30:17.398024 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 11:30:17 crc kubenswrapper[4904]: I0223 11:30:17.398640 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 11:30:17 crc kubenswrapper[4904]: I0223 11:30:17.398698 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" Feb 23 11:30:17 crc kubenswrapper[4904]: I0223 11:30:17.399691 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"} pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 11:30:17 crc kubenswrapper[4904]: I0223 11:30:17.399923 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" containerID="cri-o://7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6" gracePeriod=600 Feb 23 11:30:17 crc kubenswrapper[4904]: E0223 11:30:17.538025 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:30:18 crc kubenswrapper[4904]: I0223 11:30:18.047502 4904 generic.go:334] "Generic (PLEG): container finished" podID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6" exitCode=0 Feb 23 11:30:18 crc kubenswrapper[4904]: I0223 11:30:18.047579 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerDied","Data":"7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"} Feb 23 11:30:18 crc kubenswrapper[4904]: I0223 11:30:18.047638 4904 scope.go:117] "RemoveContainer" containerID="2cd8fbdbd2b09a00e00db2bbd257852b69be64993b2249f7cfb8bc7eb06ffc3e" Feb 23 11:30:18 crc kubenswrapper[4904]: I0223 11:30:18.048500 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6" Feb 23 11:30:18 crc kubenswrapper[4904]: E0223 11:30:18.048965 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.373655 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mrscb_a4dbb64a-0dcd-47cd-bc72-8c9acb096464/kube-rbac-proxy/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.540592 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mrscb_a4dbb64a-0dcd-47cd-bc72-8c9acb096464/controller/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.573485 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-frr-files/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.765410 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-reloader/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.780506 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-reloader/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.806603 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-frr-files/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.808658 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-metrics/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.949365 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-frr-files/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.957412 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-reloader/0.log" Feb 23 11:30:21 crc kubenswrapper[4904]: I0223 11:30:21.993321 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-metrics/0.log" Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.000781 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-metrics/0.log" Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.175437 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-frr-files/0.log" Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.187965 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/controller/0.log" Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.194021 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-metrics/0.log" Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.198342 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/cp-reloader/0.log" Feb 23 11:30:22 crc 
kubenswrapper[4904]: I0223 11:30:22.393947 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/kube-rbac-proxy/0.log"
Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.417748 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/frr-metrics/0.log"
Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.458826 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/kube-rbac-proxy-frr/0.log"
Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.585310 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/reloader/0.log"
Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.647261 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-54msh_a84cd4c2-05fc-43f7-8dec-16587923b06f/frr-k8s-webhook-server/0.log"
Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.864809 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67c5b68f57-j5bsq_d0ffe580-b2b6-41cd-ad6b-683ead0174c5/manager/0.log"
Feb 23 11:30:22 crc kubenswrapper[4904]: I0223 11:30:22.991584 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-755d685c4c-jlswn_326de05e-dfa0-46e8-be69-d0cd954deb8a/webhook-server/0.log"
Feb 23 11:30:23 crc kubenswrapper[4904]: I0223 11:30:23.125366 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-97svr_fcd8dd0f-8522-4ce8-873b-a7821cba8bd7/kube-rbac-proxy/0.log"
Feb 23 11:30:23 crc kubenswrapper[4904]: I0223 11:30:23.738364 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-97svr_fcd8dd0f-8522-4ce8-873b-a7821cba8bd7/speaker/0.log"
Feb 23 11:30:24 crc kubenswrapper[4904]: I0223 11:30:24.084777 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t48sj_411ec8c5-7c28-46e7-8ae8-b3d9a9c603c2/frr/0.log"
Feb 23 11:30:32 crc kubenswrapper[4904]: I0223 11:30:32.255924 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:30:32 crc kubenswrapper[4904]: E0223 11:30:32.257209 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:30:38 crc kubenswrapper[4904]: I0223 11:30:38.660099 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk_13f8b25e-a541-452d-abba-022d7d0f2ae1/util/0.log"
Feb 23 11:30:38 crc kubenswrapper[4904]: I0223 11:30:38.826641 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk_13f8b25e-a541-452d-abba-022d7d0f2ae1/pull/0.log"
Feb 23 11:30:38 crc kubenswrapper[4904]: I0223 11:30:38.853170 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk_13f8b25e-a541-452d-abba-022d7d0f2ae1/util/0.log"
Feb 23 11:30:38 crc kubenswrapper[4904]: I0223 11:30:38.919282 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk_13f8b25e-a541-452d-abba-022d7d0f2ae1/pull/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.027387 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk_13f8b25e-a541-452d-abba-022d7d0f2ae1/util/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.028105 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk_13f8b25e-a541-452d-abba-022d7d0f2ae1/pull/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.054555 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nvlxk_13f8b25e-a541-452d-abba-022d7d0f2ae1/extract/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.202669 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m_279e6193-ebdd-4c4e-8f35-03f3d04040b1/util/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.387465 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m_279e6193-ebdd-4c4e-8f35-03f3d04040b1/pull/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.398299 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m_279e6193-ebdd-4c4e-8f35-03f3d04040b1/pull/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.456054 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m_279e6193-ebdd-4c4e-8f35-03f3d04040b1/util/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.565951 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m_279e6193-ebdd-4c4e-8f35-03f3d04040b1/util/0.log"
Feb 23 11:30:39 crc kubenswrapper[4904]: I0223 11:30:39.571419 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m_279e6193-ebdd-4c4e-8f35-03f3d04040b1/pull/0.log"
Feb 23 11:30:40 crc kubenswrapper[4904]: I0223 11:30:40.299866 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qvr7m_279e6193-ebdd-4c4e-8f35-03f3d04040b1/extract/0.log"
Feb 23 11:30:40 crc kubenswrapper[4904]: I0223 11:30:40.466856 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6zjc_596e43b6-0031-4018-bce2-420a012e6458/extract-utilities/0.log"
Feb 23 11:30:40 crc kubenswrapper[4904]: I0223 11:30:40.485113 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6zjc_596e43b6-0031-4018-bce2-420a012e6458/extract-utilities/0.log"
Feb 23 11:30:40 crc kubenswrapper[4904]: I0223 11:30:40.550923 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6zjc_596e43b6-0031-4018-bce2-420a012e6458/extract-content/0.log"
Feb 23 11:30:40 crc kubenswrapper[4904]: I0223 11:30:40.593607 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6zjc_596e43b6-0031-4018-bce2-420a012e6458/extract-content/0.log"
Feb 23 11:30:40 crc kubenswrapper[4904]: I0223 11:30:40.731238 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6zjc_596e43b6-0031-4018-bce2-420a012e6458/extract-utilities/0.log"
Feb 23 11:30:40 crc kubenswrapper[4904]: I0223 11:30:40.754260 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6zjc_596e43b6-0031-4018-bce2-420a012e6458/extract-content/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.003285 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvnlh_280a3854-63b9-459c-9d64-b18ecf250e5a/extract-utilities/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.142933 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvnlh_280a3854-63b9-459c-9d64-b18ecf250e5a/extract-utilities/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.249549 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvnlh_280a3854-63b9-459c-9d64-b18ecf250e5a/extract-content/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.277959 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvnlh_280a3854-63b9-459c-9d64-b18ecf250e5a/extract-content/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.448832 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n6zjc_596e43b6-0031-4018-bce2-420a012e6458/registry-server/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.534436 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvnlh_280a3854-63b9-459c-9d64-b18ecf250e5a/extract-utilities/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.574849 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvnlh_280a3854-63b9-459c-9d64-b18ecf250e5a/extract-content/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.718656 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r_1b89453f-1d71-498c-a07e-96fcb9b64f97/util/0.log"
Feb 23 11:30:41 crc kubenswrapper[4904]: I0223 11:30:41.992044 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r_1b89453f-1d71-498c-a07e-96fcb9b64f97/pull/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.051265 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r_1b89453f-1d71-498c-a07e-96fcb9b64f97/util/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.063020 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r_1b89453f-1d71-498c-a07e-96fcb9b64f97/pull/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.323527 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvnlh_280a3854-63b9-459c-9d64-b18ecf250e5a/registry-server/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.324304 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r_1b89453f-1d71-498c-a07e-96fcb9b64f97/util/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.350615 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r_1b89453f-1d71-498c-a07e-96fcb9b64f97/pull/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.353805 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawhd4r_1b89453f-1d71-498c-a07e-96fcb9b64f97/extract/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.475433 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kxhz2_523e6ad2-ad13-42ec-8352-6451ab42c338/marketplace-operator/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.521419 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xnmt_59757d7b-1b5f-4019-bae0-6605e1e7a870/extract-utilities/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.665396 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xnmt_59757d7b-1b5f-4019-bae0-6605e1e7a870/extract-utilities/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.674270 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xnmt_59757d7b-1b5f-4019-bae0-6605e1e7a870/extract-content/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.680389 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xnmt_59757d7b-1b5f-4019-bae0-6605e1e7a870/extract-content/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.844369 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xnmt_59757d7b-1b5f-4019-bae0-6605e1e7a870/extract-utilities/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.848479 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xnmt_59757d7b-1b5f-4019-bae0-6605e1e7a870/extract-content/0.log"
Feb 23 11:30:42 crc kubenswrapper[4904]: I0223 11:30:42.916122 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mjksh_16988337-ed54-4223-87e6-7c7335b35e25/extract-utilities/0.log"
Feb 23 11:30:43 crc kubenswrapper[4904]: I0223 11:30:43.042611 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xnmt_59757d7b-1b5f-4019-bae0-6605e1e7a870/registry-server/0.log"
Feb 23 11:30:43 crc kubenswrapper[4904]: I0223 11:30:43.168987 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mjksh_16988337-ed54-4223-87e6-7c7335b35e25/extract-content/0.log"
Feb 23 11:30:43 crc kubenswrapper[4904]: I0223 11:30:43.198019 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mjksh_16988337-ed54-4223-87e6-7c7335b35e25/extract-utilities/0.log"
Feb 23 11:30:43 crc kubenswrapper[4904]: I0223 11:30:43.216882 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mjksh_16988337-ed54-4223-87e6-7c7335b35e25/extract-content/0.log"
Feb 23 11:30:43 crc kubenswrapper[4904]: I0223 11:30:43.358226 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mjksh_16988337-ed54-4223-87e6-7c7335b35e25/extract-utilities/0.log"
Feb 23 11:30:43 crc kubenswrapper[4904]: I0223 11:30:43.373066 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mjksh_16988337-ed54-4223-87e6-7c7335b35e25/extract-content/0.log"
Feb 23 11:30:44 crc kubenswrapper[4904]: I0223 11:30:44.095030 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mjksh_16988337-ed54-4223-87e6-7c7335b35e25/registry-server/0.log"
Feb 23 11:30:47 crc kubenswrapper[4904]: I0223 11:30:47.266603 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:30:47 crc kubenswrapper[4904]: E0223 11:30:47.267521 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:30:47 crc kubenswrapper[4904]: I0223 11:30:47.825769 4904 scope.go:117] "RemoveContainer" containerID="2bdd0265d678d0660fcfb90519a61ade9b5fb40a572a3d91db3ad92df4d382d9"
Feb 23 11:30:58 crc kubenswrapper[4904]: I0223 11:30:58.751839 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-647bc58d4b-wz9ln_8341f5fc-eb1e-4d05-935d-653320cfac4a/prometheus-operator-admission-webhook/0.log"
Feb 23 11:30:58 crc kubenswrapper[4904]: I0223 11:30:58.769805 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-647bc58d4b-rfpvn_82047769-4adf-4b19-bd85-04bd9c681616/prometheus-operator-admission-webhook/0.log"
Feb 23 11:30:58 crc kubenswrapper[4904]: I0223 11:30:58.770590 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-mzcx5_0575e4d3-040f-492f-92ff-ea6a433ce2a2/prometheus-operator/0.log"
Feb 23 11:30:58 crc kubenswrapper[4904]: I0223 11:30:58.951739 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mrt8q_6b2753c5-297c-45a4-be5d-66e00f49448e/operator/0.log"
Feb 23 11:30:58 crc kubenswrapper[4904]: I0223 11:30:58.955807 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6dm8x_e04b69a5-a057-4e3d-81bf-509d8cce4ec4/perses-operator/0.log"
Feb 23 11:30:59 crc kubenswrapper[4904]: I0223 11:30:59.256559 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:30:59 crc kubenswrapper[4904]: E0223 11:30:59.256852 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:31:13 crc kubenswrapper[4904]: I0223 11:31:13.257590 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:31:13 crc kubenswrapper[4904]: E0223 11:31:13.258543 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:31:14 crc kubenswrapper[4904]: I0223 11:31:14.958544 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fs74l"]
Feb 23 11:31:14 crc kubenswrapper[4904]: E0223 11:31:14.959502 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e065d9-11cf-4048-aafd-05b4bd2c1a52" containerName="collect-profiles"
Feb 23 11:31:14 crc kubenswrapper[4904]: I0223 11:31:14.959520 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e065d9-11cf-4048-aafd-05b4bd2c1a52" containerName="collect-profiles"
Feb 23 11:31:14 crc kubenswrapper[4904]: I0223 11:31:14.959768 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e065d9-11cf-4048-aafd-05b4bd2c1a52" containerName="collect-profiles"
Feb 23 11:31:14 crc kubenswrapper[4904]: I0223 11:31:14.961485 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:14 crc kubenswrapper[4904]: I0223 11:31:14.972238 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fs74l"]
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.032707 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-utilities\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.032775 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mg85\" (UniqueName: \"kubernetes.io/projected/a4f1304c-1bd1-42ad-8119-43766cdc49cf-kube-api-access-6mg85\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.032830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-catalog-content\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.134925 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-utilities\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.134991 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mg85\" (UniqueName: \"kubernetes.io/projected/a4f1304c-1bd1-42ad-8119-43766cdc49cf-kube-api-access-6mg85\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.135057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-catalog-content\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.135704 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-catalog-content\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.135832 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-utilities\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.160048 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mg85\" (UniqueName: \"kubernetes.io/projected/a4f1304c-1bd1-42ad-8119-43766cdc49cf-kube-api-access-6mg85\") pod \"redhat-marketplace-fs74l\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") " pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.278448 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:15 crc kubenswrapper[4904]: I0223 11:31:15.943330 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fs74l"]
Feb 23 11:31:16 crc kubenswrapper[4904]: I0223 11:31:16.647691 4904 generic.go:334] "Generic (PLEG): container finished" podID="a4f1304c-1bd1-42ad-8119-43766cdc49cf" containerID="d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04" exitCode=0
Feb 23 11:31:16 crc kubenswrapper[4904]: I0223 11:31:16.647780 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fs74l" event={"ID":"a4f1304c-1bd1-42ad-8119-43766cdc49cf","Type":"ContainerDied","Data":"d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04"}
Feb 23 11:31:16 crc kubenswrapper[4904]: I0223 11:31:16.648242 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fs74l" event={"ID":"a4f1304c-1bd1-42ad-8119-43766cdc49cf","Type":"ContainerStarted","Data":"d6c0a1b6c53d9865de0071ddd650d46ef13c8ccbd657af6111ceda14f1104f5f"}
Feb 23 11:31:16 crc kubenswrapper[4904]: I0223 11:31:16.650873 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 11:31:17 crc kubenswrapper[4904]: I0223 11:31:17.663124 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fs74l" event={"ID":"a4f1304c-1bd1-42ad-8119-43766cdc49cf","Type":"ContainerStarted","Data":"1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a"}
Feb 23 11:31:18 crc kubenswrapper[4904]: I0223 11:31:18.684673 4904 generic.go:334] "Generic (PLEG): container finished" podID="a4f1304c-1bd1-42ad-8119-43766cdc49cf" containerID="1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a" exitCode=0
Feb 23 11:31:18 crc kubenswrapper[4904]: I0223 11:31:18.684806 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fs74l" event={"ID":"a4f1304c-1bd1-42ad-8119-43766cdc49cf","Type":"ContainerDied","Data":"1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a"}
Feb 23 11:31:19 crc kubenswrapper[4904]: I0223 11:31:19.697020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fs74l" event={"ID":"a4f1304c-1bd1-42ad-8119-43766cdc49cf","Type":"ContainerStarted","Data":"05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253"}
Feb 23 11:31:19 crc kubenswrapper[4904]: I0223 11:31:19.726663 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fs74l" podStartSLOduration=3.255257538 podStartE2EDuration="5.726641935s" podCreationTimestamp="2026-02-23 11:31:14 +0000 UTC" firstStartedPulling="2026-02-23 11:31:16.650660355 +0000 UTC m=+5110.071033858" lastFinishedPulling="2026-02-23 11:31:19.122044742 +0000 UTC m=+5112.542418255" observedRunningTime="2026-02-23 11:31:19.714440027 +0000 UTC m=+5113.134813580" watchObservedRunningTime="2026-02-23 11:31:19.726641935 +0000 UTC m=+5113.147015458"
Feb 23 11:31:25 crc kubenswrapper[4904]: I0223 11:31:25.258555 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:31:25 crc kubenswrapper[4904]: E0223 11:31:25.259529 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:31:25 crc kubenswrapper[4904]: I0223 11:31:25.279338 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:25 crc kubenswrapper[4904]: I0223 11:31:25.279817 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:25 crc kubenswrapper[4904]: I0223 11:31:25.344952 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:25 crc kubenswrapper[4904]: I0223 11:31:25.869479 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:25 crc kubenswrapper[4904]: I0223 11:31:25.932821 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fs74l"]
Feb 23 11:31:27 crc kubenswrapper[4904]: I0223 11:31:27.833093 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fs74l" podUID="a4f1304c-1bd1-42ad-8119-43766cdc49cf" containerName="registry-server" containerID="cri-o://05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253" gracePeriod=2
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.330259 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.444018 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mg85\" (UniqueName: \"kubernetes.io/projected/a4f1304c-1bd1-42ad-8119-43766cdc49cf-kube-api-access-6mg85\") pod \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") "
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.444426 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-catalog-content\") pod \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") "
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.444490 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-utilities\") pod \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\" (UID: \"a4f1304c-1bd1-42ad-8119-43766cdc49cf\") "
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.445296 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-utilities" (OuterVolumeSpecName: "utilities") pod "a4f1304c-1bd1-42ad-8119-43766cdc49cf" (UID: "a4f1304c-1bd1-42ad-8119-43766cdc49cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.445601 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.450631 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f1304c-1bd1-42ad-8119-43766cdc49cf-kube-api-access-6mg85" (OuterVolumeSpecName: "kube-api-access-6mg85") pod "a4f1304c-1bd1-42ad-8119-43766cdc49cf" (UID: "a4f1304c-1bd1-42ad-8119-43766cdc49cf"). InnerVolumeSpecName "kube-api-access-6mg85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.469016 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4f1304c-1bd1-42ad-8119-43766cdc49cf" (UID: "a4f1304c-1bd1-42ad-8119-43766cdc49cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.547560 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mg85\" (UniqueName: \"kubernetes.io/projected/a4f1304c-1bd1-42ad-8119-43766cdc49cf-kube-api-access-6mg85\") on node \"crc\" DevicePath \"\""
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.547807 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4f1304c-1bd1-42ad-8119-43766cdc49cf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.848564 4904 generic.go:334] "Generic (PLEG): container finished" podID="a4f1304c-1bd1-42ad-8119-43766cdc49cf" containerID="05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253" exitCode=0
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.848630 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fs74l" event={"ID":"a4f1304c-1bd1-42ad-8119-43766cdc49cf","Type":"ContainerDied","Data":"05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253"}
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.848672 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fs74l"
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.848699 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fs74l" event={"ID":"a4f1304c-1bd1-42ad-8119-43766cdc49cf","Type":"ContainerDied","Data":"d6c0a1b6c53d9865de0071ddd650d46ef13c8ccbd657af6111ceda14f1104f5f"}
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.848757 4904 scope.go:117] "RemoveContainer" containerID="05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253"
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.892255 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fs74l"]
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.904039 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fs74l"]
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.904762 4904 scope.go:117] "RemoveContainer" containerID="1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a"
Feb 23 11:31:28 crc kubenswrapper[4904]: I0223 11:31:28.928118 4904 scope.go:117] "RemoveContainer" containerID="d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04"
Feb 23 11:31:29 crc kubenswrapper[4904]: I0223 11:31:29.002008 4904 scope.go:117] "RemoveContainer" containerID="05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253"
Feb 23 11:31:29 crc kubenswrapper[4904]: E0223 11:31:29.002695 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253\": container with ID starting with 05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253 not found: ID does not exist" containerID="05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253"
Feb 23 11:31:29 crc kubenswrapper[4904]: I0223 11:31:29.002770 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253"} err="failed to get container status \"05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253\": rpc error: code = NotFound desc = could not find container \"05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253\": container with ID starting with 05f83f027d4b0992cc03c52b706414b8edbcb6cf71c92f93429b0534bb0a2253 not found: ID does not exist"
Feb 23 11:31:29 crc kubenswrapper[4904]: I0223 11:31:29.002796 4904 scope.go:117] "RemoveContainer" containerID="1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a"
Feb 23 11:31:29 crc kubenswrapper[4904]: E0223 11:31:29.003281 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a\": container with ID starting with 1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a not found: ID does not exist" containerID="1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a"
Feb 23 11:31:29 crc kubenswrapper[4904]: I0223 11:31:29.003320 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a"} err="failed to get container status \"1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a\": rpc error: code = NotFound desc = could not find container \"1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a\": container with ID starting with 1dee82f72cb3e2cee3b0c15de676af99356537e6acfb87cf9c1b8ced5f6d5c6a not found: ID does not exist"
Feb 23 11:31:29 crc kubenswrapper[4904]: I0223 11:31:29.003339 4904 scope.go:117] "RemoveContainer" containerID="d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04"
Feb 23 11:31:29 crc kubenswrapper[4904]: E0223 11:31:29.003777 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04\": container with ID starting with d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04 not found: ID does not exist" containerID="d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04"
Feb 23 11:31:29 crc kubenswrapper[4904]: I0223 11:31:29.003809 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04"} err="failed to get container status \"d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04\": rpc error: code = NotFound desc = could not find container \"d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04\": container with ID starting with d022a93c3f4a3286cf5f5d5d7a58d1bce88667be65740843aa772b9a52847e04 not found: ID does not exist"
Feb 23 11:31:29 crc kubenswrapper[4904]: I0223 11:31:29.281930 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f1304c-1bd1-42ad-8119-43766cdc49cf" path="/var/lib/kubelet/pods/a4f1304c-1bd1-42ad-8119-43766cdc49cf/volumes"
Feb 23 11:31:40 crc kubenswrapper[4904]: I0223 11:31:40.264085 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:31:40 crc kubenswrapper[4904]: E0223 11:31:40.264913 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:31:53 crc kubenswrapper[4904]: I0223 11:31:53.257763 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:31:53 crc kubenswrapper[4904]: E0223 11:31:53.258782 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:32:07 crc kubenswrapper[4904]: I0223 11:32:07.261704 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:32:07 crc kubenswrapper[4904]: E0223 11:32:07.262630 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:32:18 crc kubenswrapper[4904]: I0223 11:32:18.256314 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:32:18 crc kubenswrapper[4904]: E0223 11:32:18.257184 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:32:33 crc kubenswrapper[4904]: I0223 11:32:33.255762 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:32:33 crc kubenswrapper[4904]: E0223 11:32:33.256637 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:32:44 crc kubenswrapper[4904]: I0223 11:32:44.256518 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:32:44 crc kubenswrapper[4904]: E0223 11:32:44.257728 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:32:47 crc kubenswrapper[4904]: I0223 11:32:47.940126 4904 scope.go:117] "RemoveContainer" containerID="6f07abf11f78a85a0b5f70992959bebfc369e03abd5bc799fdb704d12e9dacfe"
Feb 23 11:32:55 crc kubenswrapper[4904]: I0223 11:32:55.929494 4904 generic.go:334] "Generic (PLEG): container finished" podID="15025a99-7d04-43e3-8319-51f4a350663f" containerID="78819521ef7286da36659bf94bd9c82c8aef2d688350102c6883bdedfb6898e1" exitCode=0
Feb 23 11:32:55 crc kubenswrapper[4904]: I0223 11:32:55.929582 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thlbh/must-gather-mh759" event={"ID":"15025a99-7d04-43e3-8319-51f4a350663f","Type":"ContainerDied","Data":"78819521ef7286da36659bf94bd9c82c8aef2d688350102c6883bdedfb6898e1"}
Feb 23 11:32:55 crc kubenswrapper[4904]: I0223 11:32:55.930548 4904 scope.go:117] "RemoveContainer" containerID="78819521ef7286da36659bf94bd9c82c8aef2d688350102c6883bdedfb6898e1"
Feb 23 11:32:56 crc kubenswrapper[4904]: I0223 11:32:56.060460 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-thlbh_must-gather-mh759_15025a99-7d04-43e3-8319-51f4a350663f/gather/0.log"
Feb 23 11:32:59 crc kubenswrapper[4904]: I0223 11:32:59.257803 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:32:59 crc kubenswrapper[4904]: E0223 11:32:59.258612 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:33:03 crc kubenswrapper[4904]: I0223 11:33:03.804227 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-thlbh/must-gather-mh759"]
Feb 23 11:33:03 crc kubenswrapper[4904]: I0223 11:33:03.804887 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-thlbh/must-gather-mh759" podUID="15025a99-7d04-43e3-8319-51f4a350663f" containerName="copy" containerID="cri-o://ea69482b9f5e0b05b2545b5416e47b010e8091bb8024c1449001fca79daa79cb" gracePeriod=2
Feb 23 11:33:03 crc kubenswrapper[4904]: I0223 11:33:03.821081 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-thlbh/must-gather-mh759"]
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.017764 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-thlbh_must-gather-mh759_15025a99-7d04-43e3-8319-51f4a350663f/copy/0.log"
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.020898 4904 generic.go:334] "Generic (PLEG): container finished" podID="15025a99-7d04-43e3-8319-51f4a350663f" containerID="ea69482b9f5e0b05b2545b5416e47b010e8091bb8024c1449001fca79daa79cb" exitCode=143
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.287552 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-thlbh_must-gather-mh759_15025a99-7d04-43e3-8319-51f4a350663f/copy/0.log"
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.291927 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thlbh/must-gather-mh759"
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.566218 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2nw5\" (UniqueName: \"kubernetes.io/projected/15025a99-7d04-43e3-8319-51f4a350663f-kube-api-access-m2nw5\") pod \"15025a99-7d04-43e3-8319-51f4a350663f\" (UID: \"15025a99-7d04-43e3-8319-51f4a350663f\") "
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.566427 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15025a99-7d04-43e3-8319-51f4a350663f-must-gather-output\") pod \"15025a99-7d04-43e3-8319-51f4a350663f\" (UID: \"15025a99-7d04-43e3-8319-51f4a350663f\") "
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.597858 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15025a99-7d04-43e3-8319-51f4a350663f-kube-api-access-m2nw5" (OuterVolumeSpecName: "kube-api-access-m2nw5") pod "15025a99-7d04-43e3-8319-51f4a350663f" (UID: "15025a99-7d04-43e3-8319-51f4a350663f"). InnerVolumeSpecName "kube-api-access-m2nw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.670342 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2nw5\" (UniqueName: \"kubernetes.io/projected/15025a99-7d04-43e3-8319-51f4a350663f-kube-api-access-m2nw5\") on node \"crc\" DevicePath \"\""
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.808131 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15025a99-7d04-43e3-8319-51f4a350663f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "15025a99-7d04-43e3-8319-51f4a350663f" (UID: "15025a99-7d04-43e3-8319-51f4a350663f"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 11:33:04 crc kubenswrapper[4904]: I0223 11:33:04.876119 4904 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15025a99-7d04-43e3-8319-51f4a350663f-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 23 11:33:05 crc kubenswrapper[4904]: I0223 11:33:05.031410 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-thlbh_must-gather-mh759_15025a99-7d04-43e3-8319-51f4a350663f/copy/0.log"
Feb 23 11:33:05 crc kubenswrapper[4904]: I0223 11:33:05.031945 4904 scope.go:117] "RemoveContainer" containerID="ea69482b9f5e0b05b2545b5416e47b010e8091bb8024c1449001fca79daa79cb"
Feb 23 11:33:05 crc kubenswrapper[4904]: I0223 11:33:05.032003 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thlbh/must-gather-mh759"
Feb 23 11:33:05 crc kubenswrapper[4904]: I0223 11:33:05.060571 4904 scope.go:117] "RemoveContainer" containerID="78819521ef7286da36659bf94bd9c82c8aef2d688350102c6883bdedfb6898e1"
Feb 23 11:33:05 crc kubenswrapper[4904]: I0223 11:33:05.269669 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15025a99-7d04-43e3-8319-51f4a350663f" path="/var/lib/kubelet/pods/15025a99-7d04-43e3-8319-51f4a350663f/volumes"
Feb 23 11:33:13 crc kubenswrapper[4904]: I0223 11:33:13.255584 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:33:13 crc kubenswrapper[4904]: E0223 11:33:13.256528 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:33:25 crc kubenswrapper[4904]: I0223 11:33:25.256406 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:33:25 crc kubenswrapper[4904]: E0223 11:33:25.258046 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:33:39 crc kubenswrapper[4904]: I0223 11:33:39.256564 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:33:39 crc kubenswrapper[4904]: E0223 11:33:39.257557 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:33:48 crc kubenswrapper[4904]: I0223 11:33:48.011115 4904 scope.go:117] "RemoveContainer" containerID="b2c78a4b3459834db032f649b479091b9295151bdc6ab53780d83bf1afff19f7"
Feb 23 11:33:50 crc kubenswrapper[4904]: I0223 11:33:50.256413 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:33:50 crc kubenswrapper[4904]: E0223 11:33:50.257769 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:34:04 crc kubenswrapper[4904]: I0223 11:34:04.255362 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:34:04 crc kubenswrapper[4904]: E0223 11:34:04.256112 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:34:18 crc kubenswrapper[4904]: I0223 11:34:18.255926 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:34:18 crc kubenswrapper[4904]: E0223 11:34:18.257112 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:34:32 crc kubenswrapper[4904]: I0223 11:34:32.256056 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:34:32 crc kubenswrapper[4904]: E0223 11:34:32.256872 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:34:43 crc kubenswrapper[4904]: I0223 11:34:43.255508 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:34:43 crc kubenswrapper[4904]: E0223 11:34:43.256294 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:34:55 crc kubenswrapper[4904]: I0223 11:34:55.255626 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:34:55 crc kubenswrapper[4904]: E0223 11:34:55.256862 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:35:10 crc kubenswrapper[4904]: I0223 11:35:10.255200 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:35:10 crc kubenswrapper[4904]: E0223 11:35:10.255969 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h4l4k_openshift-machine-config-operator(91cb76d8-4bf9-49e5-b51a-c55794ba0cec)\"" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec"
Feb 23 11:35:25 crc kubenswrapper[4904]: I0223 11:35:25.255938 4904 scope.go:117] "RemoveContainer" containerID="7f16125921a0ce6ff1e6d227678264e846c2dca815862753951c15ed942ef8e6"
Feb 23 11:35:25 crc kubenswrapper[4904]: I0223 11:35:25.593010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" event={"ID":"91cb76d8-4bf9-49e5-b51a-c55794ba0cec","Type":"ContainerStarted","Data":"d489a333c69ec300f8bf9cb4548737c7a979a5026882f02204bd4da1bc7a6e3e"}
Feb 23 11:37:47 crc kubenswrapper[4904]: I0223 11:37:47.398381 4904 patch_prober.go:28] interesting pod/machine-config-daemon-h4l4k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 11:37:47 crc kubenswrapper[4904]: I0223 11:37:47.399086 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h4l4k" podUID="91cb76d8-4bf9-49e5-b51a-c55794ba0cec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"